For VMs and containers (Docker), we can use the logmet service (logging and metrics) as described in the Bluemix documentation. I wonder whether we can also use this service for a Cloud Foundry app via a log drain (https://docs.cloudfoundry.org/devguide/services/log-management.html).
Ref: https://developer.ibm.com/bluemix/2015/12/11/sending-logs-to-bluemix-using-logstash-forwarder/
For Cloud Foundry applications the Monitoring & Analytics service in the catalog provides similar functionality.
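If you still want to experiment with a log drain from a Cloud Foundry app, the usual pattern is a user-provided service with a syslog drain URL bound to the app. A minimal sketch (the host and port below are placeholders, not an actual logmet endpoint):

    # create a user-provided service that forwards app logs to a syslog drain
    # (logs.example.com:5000 is a placeholder endpoint)
    cf cups my-log-drain -l syslog://logs.example.com:5000

    # bind the drain to the app and restage so logs start flowing
    cf bind-service my-app my-log-drain
    cf restage my-app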
We have an existing service mesh built with Envoy and an internal service control and discovery stack. We want to offer Cloud Run to our developers. How can we integrate Cloud Run into the mesh network so that:
1. The Cloud Run containers can talk to the mesh services.
2. The services built with Cloud Run can be discovered and used by other mesh services (each of which has an Envoy sidecar).
The GCP docs cover this in the doc on using Cloud Run for Anthos services with Istio.
In a nutshell, you will need to (a rough sketch of the first two steps follows this list):
Create a GKE cluster with Cloud Run enabled.
Deploy a sample service to Cloud Run for Anthos on Google Cloud.
Create an IAM service account.
Create an Istio authentication policy.
Create an Istio authorization policy.
Test the solution.
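For the first two steps, something like the following should get you started (the cluster name, zone, machine type, and sample image are placeholders, and gcloud flags change between releases, so check the current reference):

    # create a GKE cluster with the Cloud Run (for Anthos) addon enabled
    gcloud container clusters create my-run-cluster \
      --addons=HttpLoadBalancing,CloudRun \
      --machine-type=n1-standard-4 \
      --zone=us-central1-a \
      --enable-stackdriver-kubernetes

    # deploy a sample service onto that cluster
    gcloud run deploy hello \
      --image=gcr.io/cloudrun/hello \
      --platform=gke \
      --cluster=my-run-cluster \
      --cluster-location=us-central1-a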
But things change depending on how your existing service mesh is configured. Elaborating on that part would let the community assist you better.
As far as I know, Nimbella is a serverless cloud platform that lets a developer deploy their application on any public cloud, since it is cloud-agnostic by nature and thereby avoids vendor lock-in.
"Nimbella is cloud-agnostic and can run on public and private clouds thus naturally supporting a hybrid or multi-cloud strategy. As a developer, you can code once and run on all clouds or your local machine, because you can deploy the Nimbella platform anywhere." (from the official Nimbella documentation)
So my question is: I didn't see anything that connects an application on Nimbella to any of the public cloud services. How can we deploy an application on Nimbella to any of the public clouds (AWS, Firebase)?
Nimbella's serverless functions are powered by Apache OpenWhisk (github, web), which makes it possible to run your serverless code on any cloud that is powered by OpenWhisk. In addition to Nimbella, other cloud providers offer OpenWhisk as a service: IBM and Adobe I/O Runtime. There is also a "Nimbella lite" Digital Ocean droplet. Nimbella can be deployed on-prem or on a cloud of your choice, but this is currently offered as an enterprise feature.
Serverless functions for OpenWhisk have one of the purest function signatures: JSON dictionary -> JSON dictionary, and the programming model is consistent across the languages supported by the project. Additionally, one can run a container as a function, so a developer rarely needs to distinguish between function and container. The programming model for serverless functions developed for OpenWhisk makes them highly portable as well.
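To illustrate that signature, here is a minimal sketch of an OpenWhisk action created and invoked with the wsk CLI; the action name, file name, and parameter are just examples, and the same flow works on any OpenWhisk-based provider:

    # write a minimal action: input and output are both JSON dictionaries
    cat > hello.py <<'EOF'
    def main(args):
        name = args.get("name", "world")
        return {"greeting": "Hello " + name}
    EOF

    # create the action and invoke it with the OpenWhisk CLI
    wsk action create hello hello.py
    wsk action invoke hello --param name Nimbella --result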
Is it possible to connect an external worker that is not part of the Cloud Composer Kubernetes cluster? Use case would be connecting a box in a non-cloud data center to a Composer cluster.
Hybrid clusters are not currently supported in Cloud Composer. If you attempt to roll your own solution on top of Composer, I'd be very interested in hearing what did or didn't work for you.
I have a docker-compose.yml file which has environment variables and certificates. I would like to deploy these on a Cloud Foundry dev version.
I want to deploy the Microgateway on Cloud Foundry; the link for the Microgateway is below:
https://github.com/CAAPIM/Microgateway
In the cloud-native world, you instantiate services in your foundation beforehand. You can use prebuilt services (e.g. the auto-scaler) available from the marketplace.
If the service you want is not available, you can install a tile (e.g. Redis, MySQL, RabbitMQ), which will add services to the marketplace. Lots of vendors provide tiles that can be installed on PCF (check network.pivotal.io for the full list).
If you have services that are outside of Cloud Foundry (e.g. Oracle, Mongo, or MS SQL Server) and you wish to inject them into your Cloud Foundry foundation, you can do that by creating User-Provided Services (cups).
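For example, exposing an external database as a user-provided service might look like this (the instance name and connection string are placeholders):

    # expose an external database to apps in this org/space as a user-provided service
    cf create-user-provided-service my-oracle-db -p '{"uri":"oracle://user:secret@db.example.com:1521/ORCL"}'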
Once a service is available, you have to create a service instance. Think of it as provisioning the service for you. After you have provisioned it, i.e. created a service instance, you can bind it to one or more apps.
A service instance is scoped to an org and a space. All apps within that org and space can be bound to the service instance.
You deploy your app individually, by itself, to Cloud Foundry (jar, war, zip). You then bind any needed services to your app (e.g. db, scaling, caching, etc.).
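A typical sequence with the cf CLI looks roughly like this (the service, plan, and names are placeholders; available plans depend on your foundation):

    # provision a service instance from a marketplace service, then bind it to the app
    cf create-service p-mysql 100mb my-mysql
    cf bind-service my-app my-mysql
    cf restage my-app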
Use a manifest file to do all these steps in one deployment.
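A minimal manifest.yml along those lines (the app name, path, and service instance names are placeholders):

    # manifest.yml: push the app and bind its services in one step
    applications:
    - name: my-app
      path: ./target/my-app.jar
      memory: 1G
      instances: 2
      services:
        - my-mysql
        - my-oracle-db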
PCF 2.0 is introducing PKS (Pivotal Container Service). It is an implementation of Kubo within PCF. It is still not GA.
Kubo, Kubernetes, and PKS allow you to deploy your containerized applications.
I have played with Minikube and a little bit of Kubo. I'm still getting my hands wet with PKS.
Hope this helps!
Currently I can see that Spring Cloud Data Flow has these servers: Local, YARN, Cloud Foundry, Mesos, and Kubernetes; is there any plan for Swarm support?
Caleb: Spring Cloud Data Flow's deployer implementation is based on Spring Cloud Deployer's service provider interface (SPI), so there are currently SPI implementations for Local, CF, YARN, Kubernetes, and Mesos. These implementations are managed in separate repos, too.
This decoupling provides flexibility and makes it easy to add new deployers. Though we haven't attempted a Docker Swarm deployer yet, we would love to review contributions from the community.