I am using PCF and trying to bulk/single import applications using an HTTP URL, but the network is blocking external HTTP. Is there an option to upload my task jar without serving it over HTTP?
The following URLs are the ones I am trying to import:
http://repo.spring.io/libs-snapshot/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Celsius.BUILD-SNAPSHOT/spring-cloud-stream-app-descriptor-Celsius.BUILD-SNAPSHOT.stream-apps-kafka-10-maven
http://repo.spring.io/libs-release-local/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Celsius.SR3/spring-cloud-stream-app-descriptor-Celsius.SR3.stream-apps-rabbit-maven
Yes, you can!
The HTTP URLs that we publish are nothing but property files with key/value pairs of out-of-the-box application coordinates. You could download the file to your laptop and use the third option on the page, "Bulk import application coordinates from a property file." Alternatively, from the same page, you could copy and paste the key/value pairs into the "Apps as Properties" text-area. Either of these options would register the application coordinates in SCDF's App registry.
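For illustration, the entries in those property files look like this (the two apps and the version shown here are just examples; the actual descriptor lists the full set of out-of-the-box apps):

source.time=maven://org.springframework.cloud.stream.app:time-source-rabbit:1.3.0.RELEASE
sink.log=maven://org.springframework.cloud.stream.app:log-sink-rabbit:1.3.0.RELEASE

Pairs like these are exactly what you'd paste into the "Apps as Properties" text-area.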
However, at runtime, these applications will be resolved, downloaded, and deployed (by SCDF) as part of the stream/task deployments. That would mean, in a restricted environment, you may still run into the same connectivity problem.
For that reason, we provide several other options in PCF to host and resolve application artifacts; see the ref. docs. The SCDF App Tool is typically the one preferred by PCF customers.
Related
I deployed spring-cloud-dataflow-server-cloudfoundry to SAP Cloud Foundry with the environment variables below:
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL:https://api.cf.sap.hana.ondemand.com
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG:{org}
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE:{space}
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN:{domain}
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME:username
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD:password
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION:false
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: mq
I imported the stream starter apps using bulk import, and created a stream using "time-source-rabbit-1.3.0.RELEASE.jar" and "log-sink-rabbit-1.3.0.RELEASE.jar".
But I cannot deploy the stream.
The final status is "partial", and the apps' runtime status is failed.
My question is:
1. Can spring-cloud-dataflow-server-cloudfoundry be used in SAP Cloud Foundry the way I am using it?
2. When deploying a stream in Cloud Foundry through the spring-cloud-dataflow-server-cloudfoundry dashboard, do I need to set any other properties?
Thanks in advance.
Looking at the manifest.yml, it appears that org, space, and domain weren't replaced with SAP-CF specific values. Pay attention to the following note in the ref. guide.
Now we can configure the app. The following configuration is for Pivotal Web Services. You need to fill in {org}, {space}, {email} and {password} before running these commands.
If you have replaced them with your environment-specific values, the next step is to check the SCDF server's logs. They will include specific details about why the deployment failed, if it did.
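For example, assuming the server was pushed as an app named dataflow-server (use whatever name is in your manifest.yml), you can replay its recent logs with the cf CLI:

cf logs dataflow-server --recent

Failed stream-app deployments usually leave a specific error there, such as staging or health-check failures.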
Now to answer your questions.
For #1, it is hard to say without logs or environment details. We don't actively test against the SAP distribution of Cloud Foundry. As long as the distribution is compatible with Diego 1.7.1 or above, it should work. We also publish the CF-compatible versions on the project site; perhaps that could be useful for comparing against the SAP CF environment and its foundation versions.
For #2, no, you don't need any other properties.
I'm transforming my SDK-based Firefox extension to WebExtensions and I've come to the issue of updating the extension. The current extension is hosted on my own domain (which is an HTTP domain), along with the update.rdf file.
Now, for SDK-based add-ons, updates were possible via HTTP as long as the update manifest was signed using the McCoy tool and the valid hash of the update file was provided in the manifest. In addition to that, install.rdf would hold the public key portion of the key used to sign the update.rdf.
There seem to be no options to do this with WebExtensions (no manifest entry for a public key, and no update manifest (.json) entry for the signature).
Does this mean Firefox will only allow self-hosted extensions to update via HTTPS? How will this affect SDK-based extensions currently hosted on HTTP domains? Will they be able to receive (at least one) update?
As you appear to have determined, the update manifest for WebExtensions-based add-ons must be served over HTTPS, not HTTP. The documentation for the update_url property in the manifest.json applications key is explicit on this point:
update_url is a link to an add-on update manifest. Note that the link must begin with "https". This key is for managing extension updates yourself (i.e. not through AMO).
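For reference, a minimal manifest.json fragment using that key would look like this (the add-on ID and URL are placeholders):

"applications": {
  "gecko": {
    "id": "my-addon@example.com",
    "update_url": "https://example.com/updates.json"
  }
}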
There is no way to use the alternate security method, which is available to other types of add-ons, of providing an updateKey (and signing the update.rdf) in an install.rdf file included with the extension.
Add-on SDK based extensions, and other types of non-WebExtensions add-ons, will continue to be able to receive their update.rdf over HTTP in the same manner as they have been.
If your issue is transitioning an add-on from an Add-on SDK base to a WebExtensions base, then you will need an update to that extension which changes the URL from which updates are served. This can happen either in some version before the transition to WebExtensions, or at the same time. Either way, it is just a new version of the add-on (announced via the update.rdf served over HTTP and appropriately signed). That new version will then have an update_url (WebExtensions) or updateURL (all other types) using the HTTPS scheme. All subsequent update manifests will then be served over HTTPS.
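For illustration, the WebExtensions update manifest served from that HTTPS update_url is a JSON file along these lines (the ID, version, and URLs are placeholders):

{
  "addons": {
    "my-addon@example.com": {
      "updates": [
        { "version": "2.0", "update_link": "https://example.com/my-addon-2.0.xpi" }
      ]
    }
  }
}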
I am developing a service-layer app which provides a catalog of web services, and I am orchestrating them using OpenESB.
I create my BPELs by importing external WSDL definitions via http://localhost:8080/services/myService?wsdl.
The problem is that these BPELs strongly depend on this specific URL, and when I deploy to the production server, my ESB layer stops working.
How can I make my BPELs independent of the specific endpoint? Can I reference the URIs from an external config file?
To do this, you must create an application configuration and application variables, and use them in your HTTP address. Example: "http://${MyHttpAddress}:${MyHttpPort}/service1/myService?wsdl".
Application configurations and variables are set up in the administrative console and can be changed for each environment.
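For example, the concrete endpoint in the imported WSDL would then reference the variables instead of a hard-coded host and port (the variable names here are illustrative; use whatever you define in the console):

<soap:address location="http://${MyHttpAddress}:${MyHttpPort}/services/myService"/>

At deployment time, the values defined for the target environment are substituted in.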
Regards
Paul
I have a Windows service which downloads some files from SFTP, uploads them to a database, and generates PDFs from that data. When I hand the executable files to my client, I think he will need to change the app config file, e.g., the SFTP details and the PDF paths. So I am thinking about a program, such as a Windows Forms or console app, which reads the input and saves it in the app config file. Is something like that possible? By the way, I have created a setup project for the Windows service, from which he gets two files: an .msi file and a setup file. Is it possible to achieve the above in this case?
If I understand correctly, you want some kind of UI application that allows the user to configure the operation of the Windows service. This is certainly possible, as I've been doing it for several years now. However, you don't want to do this via the app.config file alone. The app.config file is read by the Windows service when it starts up, so any changes made to it would go unnoticed until the service restarts. A better course of action is to communicate the changes to the service via Windows Communication Foundation (or some other IPC mechanism, e.g., pipes, sockets, shared memory, etc.). I've used this successfully, although to be honest, I'm using ordinary sockets now. In any case, the service would basically "listen" for incoming configuration messages, "read" those messages, and then "configure" itself accordingly, perhaps even saving the changes to its app.config file so they are preserved for when the service restarts later.
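As a minimal sketch of the socket approach (the port number and the line-based "key=value" message format are assumptions, not a standard), the service listens for configuration messages and applies each one as it arrives:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class ConfigListener
{
    static void Main()
    {
        // In a real service this loop would run on a background thread started in OnStart().
        var listener = new TcpListener(IPAddress.Loopback, 9000); // arbitrary local port
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream()))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // Each line is a "key=value" configuration message, e.g. "SftpHost=example.com".
                    // Parse it, reconfigure the service, and optionally persist it to app.config.
                    Console.WriteLine("Received setting: " + line);
                }
            }
        }
    }
}

The configuration UI (Windows Forms or console) would then just open a TcpClient to the same port and write its "key=value" lines.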
HTH
I have a server/client project, both written in Dart. My server starts on port 1337, and when I run my client with "Run in Dartium", my static files are served on port 3030, which allows me to debug my client code in the Dart Editor.
The problem is that this causes CORS issues with AJAX calls. I have properly set up my server to accept other origins (with Access-Control-Allow-Origin), but cookies, for example, aren't sent along.
Now I'm wondering: is there a way to serve my files with my server (running on 1337) and still have the possibility to debug the client side code in the dart editor?
My understanding is that you can debug, but the real problem is that you don't get the expected data back from the server due to missing cookies.
Standard CORS requests do not send or set any cookies by default.
In order to include cookies as part of the request, besides setting up the server, you need to specify the withCredentials property, e.g.:
HttpRequest.getString(url, withCredentials: true)...
You will also need to set up the server to send the Access-Control-Allow-Credentials header.
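For completeness, a minimal dart:io sketch of the server side (the origin and ports are assumptions based on the setup described above); note that when credentials are involved, Access-Control-Allow-Origin must name the origin explicitly rather than using "*":

import 'dart:io';

void main() {
  HttpServer.bind(InternetAddress.LOOPBACK_IP_V4, 1337).then((server) {
    server.listen((HttpRequest request) {
      // Allow the Dart Editor's static-file server origin, with credentials.
      request.response.headers.set('Access-Control-Allow-Origin', 'http://127.0.0.1:3030');
      request.response.headers.set('Access-Control-Allow-Credentials', 'true');
      request.response.write('ok');
      request.response.close();
    });
  });
}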
EDIT: it seems that an additional issue is that you don't want to have two servers, each serving a different part of the app.
In that case, you can configure the Dart Editor to launch a URL instead of files. Go to Run > Manage Launches and create a new Dartium or Dart2JS launch with the URL and source directory specified.
Another option is to select Run > Remote Connection and attach to a running instance of browser or Dart VM.
Caveat: I haven't tried these options, so I can't tell how stable they are.