Using Backstage - Docker

I'd like to start using Spotify's Backstage at my job, but I'm a bit clueless about exactly how to start...
I currently want to use it for two main tasks in our infrastructure. First, I want to add documentation to the Backstage platform; it can even be handwritten Markdown files that get centralized there. It seems to support MkDocs, so this should be doable.
Second, I want to embed into the Backstage platform a custom script/Docker image we use to create DB replicas for local development.
Essentially, I want to start unifying in a central location some of our infrastructure and documentation concerns which I think is exactly what Backstage aims to solve.
I've managed to download and start the Docker image. But are there any more practical, hands-on tutorials on how to embed and run custom Docker images from the platform, together with some docs?
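For the documentation half, the usual TechDocs pattern is to register each component with a catalog-info.yaml that points at a docs folder in the same repo. A minimal sketch, assuming the standard TechDocs setup (the component name, type, and owner below are placeholders):

```yaml
# catalog-info.yaml - registers the component and its docs with Backstage
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: db-replica-tool              # placeholder name
  annotations:
    backstage.io/techdocs-ref: dir:.   # docs/ and mkdocs.yml live next to this file
spec:
  type: tool                         # placeholder
  lifecycle: experimental
  owner: infra-team                  # placeholder
```

Next to it you need an mkdocs.yml whose plugins list includes techdocs-core, with your Markdown under a docs/ folder.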

Related

Does developing using RoboMaker tie me tightly to AWS?

I am in the process of learning ROS and Gazebo for a project and saw that Amazon's RoboMaker seems to be a great environment to get something up and working quickly. My concern is that if I design and develop my project in RoboMaker, it will be tough to move over to my own servers should the time come.
Has anyone had any experience with this? Is it pretty simple to move over to another solution/infrastructure or are you married to AWS once you go down the rabbit hole?
Do they demand ownership of your IP?
Basically, if it is "normal ROS", there's a robot encoded in a URDF file and a bunch of standard ROS libraries (gazebo, controller, robot_localization, amcl, etc.), all strung together by a bunch of launch files (i.e. XML startup scripts).
They can't stop you from using this on your own system.
What they really offer, if you look at what they're pitching (https://aws.amazon.com/robomaker/), is a bunch of custom software, chiefly useful for running a fleet of robots in an industrial setting, involving human interaction/voice, data analytics, and monitoring.
For most people who are learning, you can get by entirely on your own on native Ubuntu or an Ubuntu VirtualBox VM. If you were to touch RoboMaker at all, its simulation environment may be nicer out of the box (I haven't played with it). The rest of the benefits, to translate them, are robots with data analytics and fancy AI + web tools, plus fleet lifecycle management.
You are most certainly not locked in - just make sure you want what they offer before shelling out anything.
In general, it is also a good idea to put your application in containers using Docker or AWS ECR, so you can deploy your app in any other environment using a CI/CD pipeline, including the edge computer.

Is it feasible to have one docker image for an already existing application with multiple dependencies

I am new to Docker and want to learn the ropes with real-life challenges.
I have an application hosted on IIS that has dependencies on SQL Express and Solr.
I want to understand the following:
1. Is it possible to have my whole set-up, including IIS, SQL, Solr, and my application, in one single container?
2. If point 1 is feasible, how should I start with it?
Sorry if my questions are basic.
It is feasible, just not good practice. You want to isolate the pieces of the software stack to improve maintainability (easier to deploy updates), modularity (you can reuse a component in a different project, or even have multiple projects reuse the same image), and security (a vulnerability in one component of the stack will hardly be able to reach a different component).
So, instead of putting everything into the same image, I recommend using Docker Compose: keep a separate image for each component of the stack (you can even pull generic, up-to-date images from Docker Hub) and assemble them in the Compose file, so that a single command fires up all the components your application needs.
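As a rough sketch of that Compose layout (image names and the password are placeholders; note that SQL Server and Solr have official Linux images, while an IIS app needs Windows containers, so in practice the app service may have to run on a Windows host):

```yaml
# docker-compose.yml - one service per stack component
services:
  app:
    build: .                    # your IIS app's own Dockerfile
    ports:
      - "8080:80"
    depends_on:
      - db
      - solr
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest   # SQL Server (Linux)
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "ChangeMe_123"     # placeholder
    volumes:
      - db-data:/var/opt/mssql        # keeps data across container rebuilds
  solr:
    image: solr:9
    volumes:
      - solr-data:/var/solr
volumes:
  db-data:
  solr-data:
```

A single `docker compose up -d` then starts all three services together.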
That being said, it is feasible to put the whole stack into the same Dockerfile, but it will be an unholy mess. You'll need a Dockerfile that installs all the required software, which will make it bulky and hard to maintain. If you're really up for this, you'll have to start from a basic OS image (maybe Windows Server Core with IIS) and from there install all the other software manually. If there are Dockerfiles for the other components you need and they share the same base image, or a compatible one, you can straight copy-paste their contents into your Dockerfile, at the cost of said maintainability.
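If you did go monolithic anyway, the Dockerfile would look something like this sketch (the install steps are illustrative placeholders, not real installer URLs or flags):

```dockerfile
# Monolithic image: discouraged, shown only to illustrate the shape of it
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019

# Each stack component would need its own silent-install step here;
# the lines below are placeholders for the real installer URLs and switches.
# ADD <sql-express-installer-url> C:/setup/sqlexpress.exe
# RUN C:/setup/sqlexpress.exe /QS /ACTION=Install
# ADD <solr-archive-url> C:/setup/solr.zip
# RUN powershell Expand-Archive C:/setup/solr.zip C:/solr

# Finally copy the application itself into the IIS web root
COPY ./app/ C:/inetpub/wwwroot/
```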
Also, you should definitely use volumes to keep your data safe, especially if you take this monolithic approach, since otherwise you risk losing the database's data when the container is removed.
TL;DR: yes, you can, but you really don't want to, since there are much better alternatives that are hardly any more difficult.

Do OS providers make special / custom made OS for docker?

I am trying to understand Docker and its related core concepts. I came to know that there is a concept of images, which form the basis of containers where applications run isolated.
I also came to know that we can download the official images from Docker Hub, https://hub.docker.com (partial screenshot omitted here).
My question is:
Do the respective companies create special/custom-made OS images (minimal ones, such as the ubuntu image we can see) for Docker? If so, what benefit do these companies get from creating these custom-made images for Docker?
One could call them custom images; however, they are just bare base images meant to be used as a starting point for your application.
They are mostly built by people who work at Docker, who try to ensure some guarantee of quality.
They are stripped of unnecessary packages in order to keep the image size to a minimum.
To find out more you could read this Docker documentation page or this blog post.
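In practice, "using them as a starting point" just means naming one in your Dockerfile's FROM line and layering only what your application needs on top; for example (the tag and package are arbitrary choices):

```dockerfile
# Start from the minimal official Ubuntu base image
FROM ubuntu:22.04

# Add only what the application needs, cleaning up to keep the image small
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

CMD ["bash"]
```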

Is it possible to run an ImageJ script on heroku?

I want to be able to upload images to a server through something like a RESTful API, which then processes the image using an ImageJ script and then returns numerical results (no image is returned)
Can this be done easily? And can it be built on Heroku?
ImageJ works well headless; see http://imagej.net/Headless. So you could certainly build a server application which uses ImageJ based on any of several different Java server platforms (JBoss, Glassfish, Jersey, etc.). Especially if you leverage the new ImageJ2 API, there is a good separation of concerns which would allow you to build an effective client-server application.
To my knowledge, however, no one has created such a thing yet. I started playing with the idea using Jersey, but there is nothing really there right now, nor any time to work on it. Contributions in this area would be most welcome!
See also the ImageJ-OMERO project, which exposes ImageJ commands (including scripts/macros) as OMERO scripts which run on the server side.
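One hedged way to get this onto Heroku today would be a Docker-based deploy: wrap a headless Fiji/ImageJ installation plus a small HTTP front end in an image and push it via Heroku's container registry. A sketch, where the REST wrapper (server.jar) and the macro (measure.ijm) are hypothetical names you would supply yourself, and the download URL may change over time:

```dockerfile
# Sketch: headless ImageJ service for a Heroku container deploy
FROM eclipse-temurin:17-jre

RUN apt-get update && apt-get install -y --no-install-recommends unzip \
    && rm -rf /var/lib/apt/lists/*

# Fetch and unpack a Fiji (ImageJ) distribution
ADD https://downloads.imagej.net/fiji/latest/fiji-linux64.zip /opt/fiji.zip
RUN unzip -q /opt/fiji.zip -d /opt && rm /opt/fiji.zip

COPY measure.ijm /opt/Fiji.app/   # the macro that computes the numerical results
COPY server.jar  /opt/            # hypothetical REST wrapper that shells out to
                                  # Fiji.app/ImageJ-linux64 --headless -macro measure.ijm

# Heroku injects $PORT; the wrapper must bind to it
CMD ["java", "-jar", "/opt/server.jar"]
```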

How to install Dart in Google Compute Engine (GCE)?

I am trying to learn both Dart and GCE. I already created a server at GCE, but I don't know how to install Dart since I can only use Linux commands on the Debian server.
This is mostly about Dart on AppEngine:
You should be able to find all information here https://www.dartlang.org/cloud/
I did it using the instructions from this page and the linked pages at the bottom.
The discussions here https://groups.google.com/a/dartlang.org/forum/#!forum/cloud provide some useful bits too.
Dart on Compute engine:
Here is a blog post that covers it pretty well, http://financecoding.github.io/blog/2013/09/30/getting-started-with-dart-on-compute-engine/, but some things have changed since it was written.
There are docker images https://github.com/dart-lang/dart_docker ready to execute Dart scripts.
Just search for information on how to use Docker images with GCE and you should be fine (there should already be a lot of material available).
Please ask again if you encounter concrete problems.
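Using those Dart images is essentially a two-file affair; a sketch, assuming the current official image name (the dart_docker repo documents the one to use for your era, and the script path below is hypothetical):

```dockerfile
# Run a Dart script inside a container on a GCE instance
FROM dart:stable                # image name assumed; check dart_docker for the current one

WORKDIR /app
COPY bin/server.dart /app/      # hypothetical script name

CMD ["dart", "/app/server.dart"]
```

On a GCE instance with Docker installed you would then build and run this like any other image.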
Dart on AppEngine runs as a Managed VM. Managed VMs work differently from the "native" AppEngine VMs that run supported languages like Java, Python, and Go. Managed VMs are in fact Compute Engine instances, but managed by AppEngine: they are launched and shut down depending on the load (and depending on some basic configuration settings in app.yaml, and also on payment settings), while Compute Engine instances are basically always on, and you have to manage yourself when instances should be added or removed as the load changes. Kubernetes is a handy tool that makes this easier, but you still have to actually manage your instances. Aside from that, there is not much difference between Managed VMs and Compute Engine instances. One difference from native AppEngine is that you can add any libraries and also binaries to Managed VMs, as you can with Compute Engine.
There are pricing differences, but I don't know the details myself yet.
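For reference, the app.yaml switch that turned an app into a Managed VM looked roughly like this at the time (a historical sketch; the scaling number is a placeholder):

```yaml
# app.yaml - historical Managed VM configuration sketch
runtime: custom    # bring your own runtime (e.g. Dart) via a Dockerfile
vm: true           # this is what makes it a Managed VM
manual_scaling:
  instances: 1     # placeholder; instance count is managed per these settings
```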
