IBM MobileFirst SDK iOS mfpclient.plist

I am creating schemes (to differentiate between development and production environments) for my iOS application, which uses the IBM MobileFirst platform. I need to provide different values for PROTOCOL, HOST, and PORT based on the selected scheme.
For the Production scheme the values should be as follows:
PROTOCOL: HTTPS
HOST: PRODUCTION HOST NAME
PORT: PRODUCTION PORT
For the Development scheme the values should be as follows:
PROTOCOL: HTTP
HOST: DEVELOPMENT HOST NAME
PORT: DEVELOPMENT PORT
As per the IBM MobileFirst documentation, these values need to be placed in the mfpclient.plist file.
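For context, the entries in question end up looking roughly like this inside mfpclient.plist (a hedged sketch; the key casing and values here are illustrative, since the real file is generated by the MobileFirst tooling):
<key>protocol</key>
<string>https</string>
<key>host</key>
<string>production-host-name</string>
<key>port</key>
<string>443</string>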

There is no need to manually update the application's .plist file or create different schemes.
What you should do is as follows:
Define the server profiles of your development and production servers in the MobileFirst CLI.
From the command line, run mfpdev server info. This shows the current list of server profiles.
Now run mfpdev server add to add another server profile (see the documentation on adding server profiles).
Once you have server profiles for development and production, whenever you want to "switch" the server your application connects to, simply register the application with the required server.
To register with the default server: mfpdev app register
To register to a specific server profile: mfpdev app register replace-with-server-profile-name
When you register the application, this command updates the .plist file with the required properties (host, port, and so on).
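A minimal command-line sketch of that flow; the profile name "production" is just an example, and mfpdev server add prompts interactively for the server URL and admin credentials:
mfpdev server info                # list the server profiles currently defined
mfpdev server add production      # define an additional server profile
mfpdev app register               # register the app with the default server profile
mfpdev app register production    # or register it with the "production" profile instead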

Related

Neo4j Docker image (VPS managed with Plesk): cannot assign certificates for a secure Bolt connection with a Let's Encrypt certificate

I'm trying to run neo4j community on a vps via a docker image managed with plesk.
I am however having issues configuring the SSL certificate so I can connect to it securely from nodejs.
Currently, the error I'm getting is quite straightforward in node:
Neo4jError: Failed to connect to server.
Please ensure that your database is listening on the correct host and port and that you have
compatible encryption settings both on Neo4j server and driver. Note that the default encryption
setting has changed in Neo4j 4.0. Caused by: Server certificate is not trusted. If you trust the
database you are connecting to, use TRUST_CUSTOM_CA_SIGNED_CERTIFICATES and add the signing
certificate, or the server certificate, to the list of certificates trusted by this driver using
`neo4j.driver(.., { trustedCertificates:['path/to/certificate.crt']}). This is a security measure
to protect against man-in-the-middle attacks. If you are just trying Neo4j out and are not
concerned about encryption, simply disable it using `encrypted="ENCRYPTION_OFF"` in the driver
options. Socket responded with: DEPTH_ZERO_SELF_SIGNED_CERT
I've mapped the volumes as follows:
/certificates to the letsencrypt live folder for the domain db.example.com
Then I'm trying to connect to it via: bolt://db.example.com:32771
When I check via the browser, the certificate being served is self-signed. I have tried adding this certificate to the trusted certificates in Windows, but that didn't change anything.
I also added the path to the trusted certificates when instantiating the driver:
this._driver = neo4j.driver(process.env.Neo4jUri, token, {
encrypted: true,
trustedCertificates: ['ssl/neo4j.crt'],
});
I've also tried to copy the files within that certificate folder so that the appropriate files are named as mentioned in this article.
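For reference, a minimal sketch of the kind of container setup being described, assuming a Neo4j 4.x image and its dbms.ssl.policy.bolt.* settings; the mount path, port mapping, and file names below are assumptions, not taken from the actual setup:
# Run Neo4j with the Bolt SSL policy pointed at the mounted Let's Encrypt files
docker run -d --name neo4j \
  -p 32771:7687 \
  -v /etc/letsencrypt/live/db.example.com:/ssl:ro \
  -e NEO4J_dbms_ssl_policy_bolt_enabled=true \
  -e NEO4J_dbms_ssl_policy_bolt_base__directory=/ssl \
  -e NEO4J_dbms_ssl_policy_bolt_private__key=privkey.pem \
  -e NEO4J_dbms_ssl_policy_bolt_public__certificate=fullchain.pem \
  -e NEO4J_dbms_connector_bolt_tls__level=REQUIRED \
  neo4j:4.4
If the full Let's Encrypt chain is served this way, the driver should not need trustedCertificates at all, since the CA is publicly trusted.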

Using mruby, ngx_mruby and Redis - applying them to the current production server

I am very afraid of making modifications on the server, because it is working fine with its current settings.
Let me explain: the server is an Amazon EC2 instance. On this instance I have:
ruby -v: ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]
rails -v: Rails 4.2.3
nginx -v: nginx/1.8.0
passenger -v: Phusion Passenger version 5.0.10
I have 1 Load Balancer, which has a listener:
Load Balancer Protocol: HTTPS
Load Balancer Port: 443
Instance Protocol: HTTP
Instance Port: 80
SSL Certificate: using a certificate issued by Amazon Certificate Manager. It covers the domain and all subdomains (wildcard).
These settings allow me to:
Have the main domains for the app:
www.testname.com and testname.com are used as institutional pages (About, Pricing, Terms, etc.);
app.testname.com is where users use the system;
Have as many subdomains as I want, because EACH USER has a specific page:
user1.testname.com
user2.testname.com
user3.testname.com
etc.
Everything is dynamic. The user registers on the app and gets a subdomain. On this subdomain, the user can access the app via https://. It works fine.
Users WANT to use their own domains, of course. This part is easily resolved: I create a CNAME record on the custom domain, pointing to our subdomain, like this:
usercustomname.com CNAME TO user1.testname.com
It works fine. BUT the big problem is: 'https://' does not work on the custom domain name, obviously. Our certificate only covers the domain testname.com and its subdomains.
With Amazon Certificate Manager I can import custom certificates, and then, using the awesome rails-letsencrypt gem, I can generate Let's Encrypt certificates for the custom domain names.
But the HTTPS listener on the Amazon Load Balancer allows only one certificate! This is very bad, because I can have a lot of certificates but use only one on the whole server.
Recently, Amazon released support for multiple certificates on an Application Load Balancer using SNI. I could migrate my Classic Load Balancer to an Application Load Balancer, but that does not solve the problem, because the limit is 25 certificates per load balancer. That is very low.
The solution I found is to create an Amazon ElastiCache instance running a Redis server and then use ngx_mruby to fetch the certificate. I plan it like this:
Change the HTTPS listener like this:
Instance Protocol: HTTPS
Instance Port: 443
Remove the certificate issued in Amazon Certificate Manager
Install mruby
Install ngx_mruby
Using the rails-letsencrypt gem, create one certificate for each institutional subdomain (app, www, bare domain) AND one certificate for each user subdomain.
When a certificate is created, the rails-letsencrypt gem can save it in Redis.
Using ngx_mruby listening on port 443, pick up the certificate for the requested domain from Redis (see the sketch below).
Apparently this will work. The logic seems right, but I do not know how it behaves in practice.
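For what it's worth, a rough sketch of the kind of handler this plan implies, assuming an ngx_mruby build with mruby-redis compiled in and support for Nginx::SSL#certificate_data; the Redis address, key naming, and dummy certificate paths are placeholders:
server {
  listen 443 ssl;
  server_name _;

  # nginx still needs a static fallback certificate/key pair
  ssl_certificate     /etc/nginx/ssl/dummy.crt;
  ssl_certificate_key /etc/nginx/ssl/dummy.key;

  # Look up the certificate for the SNI host name in Redis during the TLS handshake
  mruby_ssl_handshake_handler_code '
    ssl = Nginx::SSL.new
    domain = ssl.servername
    redis = Redis.new "127.0.0.1", 6379
    ssl.certificate_data     = redis.get "#{domain}.crt"
    ssl.certificate_key_data = redis.get "#{domain}.key"
  ';
}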
My questions are:
1) To install mruby, I will follow these steps. After installing, will it impact the current Ruby installation? Will I need to change the application code already developed as a result of installing mruby?
2) Will using Redis affect anything on the current server? Apart from the roughly $12/month increase in the Amazon bill, I believe using Redis will not influence the current server at all.
3) Do you think that what I planned to work around the Amazon certificate limit will work?
Sorry for the long text. I'm not a server specialist. This is the only server I have, AND it has no backup, and I'm afraid of breaking it with no way to fix it.
Thanks, and I appreciate any help :)
EDIT 1
Using ngx_mruby and Redis with an Amazon Classic Load Balancer will not work, because the HTTPS listener requires a single certificate. So even if I generate the certificates and hook ngx_mruby up to Redis, the Load Balancer will respond with the default domain certificate before the request ever reaches nginx.
But I found a way (and it works):
All customer URLs have this structure:
customer1.myapp.com
customer2.myapp.com
customer3.myapp.com
All requests go through the HTTPS listener on the Load Balancer, and there is no way to use multiple SSL certificates on a Classic Load Balancer. So I did the following:
Registered another domain, like myapp.net
Using Amazon Route 53, I created another hosted zone and pointed the domain's DNS records to it
In Amazon Route 53, I created these records:
A Type A record pointing to the instance IP
A Type CNAME record with name * and value myapp.net
I set up my Rails app to recognize the domain myapp.net. With this, access to customer1.myapp.com AND customer1.myapp.net reaches the same resource, BUT customer1.myapp.com goes through the HTTPS listener on the load balancer and customer1.myapp.net does not.
I just save the SSL CERTIFICATE generated by the gem in the folder /etc/nginx/ssl/ and then create a virtual host in NGINX (a sketch follows below). After that, it FINALLY WORKS!
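A minimal sketch of the kind of per-customer virtual host being described, assuming the certificate and key generated by the gem are written under /etc/nginx/ssl/ with per-domain file names (the file names and Passenger settings are assumptions):
server {
  listen 443 ssl;
  server_name usercustomname.com;

  ssl_certificate     /etc/nginx/ssl/usercustomname.com.crt;
  ssl_certificate_key /etc/nginx/ssl/usercustomname.com.key;

  # same application root and Passenger settings as the existing vhosts
  root /var/www/myapp/current/public;
  passenger_enabled on;
}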
Now I have to discover HOW TO SAVE the certificate into that folder and HOW TO CREATE a virtual host in NGINX, from RAILS. The manual process is described in my other question.
You need to recompile your nginx to support ngx_mruby; we don't recommend the dynamic module feature for now, because there is no established use case for it among people using ngx_mruby. The Ruby version on your system will not be changed.
If you enable Redis as a cache for your Rails app, it may influence your website. But if you only create a new ElastiCache instance, there is no other side effect for you. And I think using ElastiCache will be better optimized than hosting Redis yourself.
I didn't try it, but it may work. Maybe others can answer your question.

Use a trusted CA-signed certificate on a local Rails server

This might sound a little stupid, but I am trying to test iOS device enrollment and I want to use a trusted CA (e.g. Verisign, Comodo) signed certificate on my localhost Rails WEBrick server. I do not want to use a self-signed certificate because I need to test a very particular scenario. Is there a way to do this? I know domain control validation will fail if I try to create the CA-signed certificate on a site like Comodo, and I can't use a certificate I already have for my production server since it is bound to that domain. Is there a workaround to create a production-level SSL certificate and use it on a development server?
You can use your existing production certificates for your local setup, and use a local DNS server (such as BIND) to resolve the domain name to your local IP address instead of your production server's IP address.
Update:
Install BIND (or whatever DNS server software you like) on some computer on your network, let us say 192.168.100.10.
Add a record so that www.myprodserver.com resolves to 192.168.100.100 (the local machine that will run the Rails server).
Now on your local machine (assume it's a MacBook), go to your network settings and add 192.168.100.10 as the only DNS server.
Now run ping www.myprodserver.com and make sure it is resolving to 192.168.100.100.
This is almost (but not exactly) equivalent to using the /etc/hosts file to resolve domain names to IP addresses.
(All IP addresses and domain names used above are just examples.)
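For comparison, the /etc/hosts equivalent on the MacBook itself would be a single line (same example addresses as above):
192.168.100.100   www.myprodserver.com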
Also, I think you will need something better than WEBrick to handle SSL certificates. You can use nginx to offload SSL and proxy to WEBrick, as sketched below.
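A rough sketch of that nginx front end, assuming the production certificate files are copied locally and WEBrick listens on port 3000 (the paths and backend port are assumptions):
server {
  listen 443 ssl;
  server_name www.myprodserver.com;

  ssl_certificate     /etc/nginx/ssl/myprodserver.crt;
  ssl_certificate_key /etc/nginx/ssl/myprodserver.key;

  location / {
    # forward decrypted traffic to the local WEBrick server
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
  }
}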

websocket-rails works in development environment but not production environment

I am successfully using the websocket-rails gem in my development environment, but I am not able to use it when it is deployed to my production machine. I am using the standalone server mode with the JavaScript client:
var dispatcher = new WebSocketRails("localhost:3001/websocket");
But following the same technique in production either results in dispatcher being undefined (with no prefix), or in it being defined successfully but the browser being unable to establish a connection to the server (when using a wss:// prefix).
I wonder if this has anything to do with interference from the SSL cert.
Any ideas?
EDIT
I use the production server's address in production and not 'localhost'.
You should not use localhost; it should point to the production server's IP address, like:
new WebSocketRails("IP_ADDRESS:3001/websocket");
Localhost points to the same machine.
Also check the Chrome console to see whether you are getting cross-domain exceptions.
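If the production site is served over HTTPS, the client has to connect with a wss:// prefix, which usually means terminating SSL in front of the standalone websocket-rails server. A hedged nginx sketch (port 3001 comes from the question; the host name and certificate paths are assumptions):
server {
  listen 443 ssl;
  server_name www.your-production-host.com;

  ssl_certificate     /etc/nginx/ssl/production.crt;
  ssl_certificate_key /etc/nginx/ssl/production.key;

  location /websocket {
    # pass the WebSocket upgrade through to the standalone server
    proxy_pass http://127.0.0.1:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
  }
}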

Deploying the adapter shows null in console

I have not been in touch with Worklight for the past 3 months, but a lot of changes have been made across versions. In one of my projects I created an HTTP adapter, and it was working fine before. But now when I try to run that project it shows an error. I can't get help in the IBM forum because they changed my account to read-only.
[2013-02-15 10:20:37] Starting adapter deployment on Worklight Server
[2013-02-15 10:20:37] Deploying adapter: university
[2013-02-15 10:20:37] Server host: localhost
[2013-02-15 10:20:37] Server port: 8085
[2013-02-15 10:20:37] null
[2013-02-15 10:20:37] Adapter deployment failed
[2013-02-15 10:20:37] ERROR
It was running with platformVersion="5.0.2". Do I need to change the version?
Make sure to review the changes made in the authentication mechanism starting Worklight 5.0.0.3; make the appropriate changes in your project and re-deploy.
IBM tech note: IBM Worklight Project Auto-upgrade to 5.0.0.3 Authentication Model
Note that you did not supply any other information about your project and environment setup.
I found the solution for this. My adapter XML contained:
<loadConstraints maxConcurrentConnectionsPerNode="0"/>
Change maxConcurrentConnectionsPerNode from 0 to 2; this allows the adapter deployment to succeed (see the corrected element below).
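That is, the element becomes:
<loadConstraints maxConcurrentConnectionsPerNode="2"/>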
In my case, the change to maxConcurrentConnectionsPerNode did not resolve the problem. I found that the user name and password in my adapter's data source definition are mandatory (even if the database does not have a user name and password set).
