I'm very confused. Before running a simple web-only cocos2d-js example, I was under the impression that it generates the HTML5 + JS files so I can host them anywhere or even run them offline.
But that is not the case; to run my example locally I have to start a Cocos webserver, which makes me wonder: why does it need a webserver to run simple HTML5 + JS code offline? And what do I need to host a cocos2d-js web-only game (I use IIS)?
Your confusion comes from your assumption that just because something is HTML5 and JavaScript, it should be able to run locally.
That isn't always true.
There are many HTML5 features that require a webserver, mostly for security reasons. One in particular is XMLHttpRequest, which Cocos uses to load asset libraries asynchronously. Browsers block XMLHttpRequest for pages opened straight from the local file system, to prevent malicious scripts from accessing local files, so it does not work locally unless you use a local webserver.
This can be bypassed in Chrome with the --allow-file-access-from-files option, but it is still not recommended.
Using a local webserver is indeed the best solution.
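Any static file server is enough, IIS included, because the exported project is typically just HTML, JS, and asset files. For quick local testing, one throwaway option is a tiny Ruby/WEBrick script like this sketch (the port and directory are arbitrary choices, nothing Cocos requires):

# serve.rb -- minimal static file server for local testing
# (WEBrick ships with Ruby before 3.0; on newer Rubies, `gem install webrick` first)
require 'webrick'

server = WEBrick::HTTPServer.new(
  Port: 8080,                          # any free port
  DocumentRoot: File.expand_path('.')  # the folder containing index.html
)
trap('INT') { server.shutdown }        # stop cleanly on Ctrl+C
server.start

Browse to http://127.0.0.1:8080 and the XMLHttpRequest calls work, because the assets are now served over HTTP instead of file://.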
I have used Electron for many years and like the fact that I can deliver a frontend app with a bunch of backend services (connections to databases, etc.) bundled in a DMG.
Electron is, however, a bit heavyweight, and I have been looking at NeutralinoJs and Tauri to see if I can do the same. I've tried NeutralinoJs and it's certainly good for bundling a frontend app, but it appears not to have any mechanism for writing backend services, and, being written in C++, I suspect this is unlikely to change.
Does Tauri allow you to write backend services in Rust? I can't tell from the documentation.
You need to understand how NeutralinoJS works.
NeutralinoJS (written in C++) starts a server on the port specified in neutralino.config.json:
"port": 0,
Port 0 means Neutralino will choose a random port and start the server on it; that server serves all the content inside the folder we specified in the config file:
"documentRoot": "/resources/",
After starting the server, Neutralino uses the native WebView APIs to open a new window and tells the WebView to load the URL we want; in our case that is 127.0.0.1 with the port we specified.
But since this WebView can't directly modify our storage or get information about the computer, Neutralino has some predefined APIs written in C++ to view and edit information on our computer.
Since we use JavaScript, the author of Neutralino provides a bridge through which we can access all of those predefined C++ APIs, and that bridge is neutralino.js.
neutralino.js handles all the communication between our WebView and the Neutralino process.
The path of neutralino.js is also defined in the config file:
"clientLibrary": "/resources/js/neutralino.js",
So if you want to work with the "backend", you have two options: you can either add those APIs directly to Neutralino's source code, or you can use a better way of working with the backend in Neutralino called "Extensions".
Yes, it's a core feature. Here's their guide about Rust commands that can be called from JavaScript:
https://tauri.studio/en/docs/usage/guides/command
According to their roadmap, support for C-interop languages like Go, Nim, and Python is underway.
I'm planning an app for work and venturing into potential features which I've not used before.
Essentially I need to be able to access files on a network share: read, write, and delete files, as well as amend file names. iOS being a pretty closed platform, I'm not sure whether it is capable of such a thing, and if it is, what features should I look for to begin researching?
My Google-Fu hasn't come up with anything thus far so hopefully looking for someone to point me in the right direction.
Thanks.
I know this isn't very secure, but I'd personally create an ASP.NET app on your target Windows Server, or on a different server on the domain. Expose some web services, and make an iOS app with a UIWebView. From the web services you can make RPC calls that do WMI/ADSI/file-system manipulation. The gist is that you can prompt for domain credentials and essentially make remote calls.
You could expose the web app so that your iOS app can access it from the local network or via a URL. If you were to access it from outside, I'd suggest setting up some secure credentials in Windows/IIS.
Some years ago I created a "mobile-friendly" web app that allowed me to manage servers, perform RPC, and do basic Active Directory queries. It also allowed file listing and deletion/moving/copying with some creative scripting. It was essentially an ASP.NET/C# web app that loaded in an iPhone app. UIWebView in iOS was able to load it; it used AJAX and some other client-side scripting that looked decent. You'd essentially have to make sure that your web app renders properly in Safari/UIWebView (which is bastardized Safari).
Here's a link to a demo of what I created:
https://www.youtube.com/watch?v=czXmubijHwQ&t=12s
I ran it in a browser, but it'd run from my PSP, Android test devices, iPod Touch, Blackberry, etc.
I have an app with a Rails backend and a React frontend. I am deploying it in Docker containers: one for the app, one for Postgres, and one as a data volume container. I have it working, but the app image is huge (3 GB!) and takes a long time to build.
I'd love a way to split it up. The React app needs a bunch of Node packages, but only for development; once it's all webpack-ed the React app is essentially static files. And the Rails app doesn't need Node at all.
I don't need all the development-time tooling in the production image, but as it is, I feel like I need to have it all in the same codebase so I can (eventually) set up a CI/CD environment that can build the app and run all the tests. Is there a way to do this such that I'd have a container for the React/Node app and a container for Rails, and connect them at runtime?
I think that you may have found the answer to your question already - split the code bases.
We all have some kind of knee-jerk reflex to want to keep everything in a project in the same repo. It feels safe. Dealing with separate repos seems quite scary, but so does not mashing CSS and JS into the HTML, for most beginners.
"I feel like I need to have it all in the same codebase so I can (eventually) set up a CI/CD environment that can build the app and run all the tests"
Well, that would be nice; however, testing JavaScript through Ruby or automated browsers is painfully slow. You end up with a "fast" suite of unit tests and a slow suite of integration tests that takes 15+ minutes.
So what's the alternative?
Your API and your SPA application (React, in your case) actually do very different things.
The API takes HTTP requests and poops out JSON. It runs on a Ruby on Rails server and talks with a database and even other APIs.
You would do integration tests of your API by sending HTTP requests and testing the response.
Your API should not really care whether the request comes from a Fuzzle widget that renders a happy face or not. It's not the API's job.
RSpec.describe 'Pets API', type: :request do
  let!(:pet) { create(:pet) }
  let(:json) { JSON.parse(response.body) }

  describe 'GET /pets' do
    it 'returns the pet' do
      get '/pets'
      expect(json.first["name"]).to eq pet.name
    end
  end
end
The SPA server basically just needs to serve static HTML and just enough JavaScript to get things rolling.
A Docker container seems almost overkill here: you just want an nginx server behind a reverse proxy or a load balancer, as you're only serving up one thing.
You should have tests written in JavaScript that either mock out the API server or talk to a fake API server. If you really have to, you could automate a browser and let it talk to a test version of the API.
Your SPA will most likely have its own JS-based toolkit and build process, and most importantly, its own test suite.
Of course this is highly opinionated, but think about it: both projects will benefit from having their own infrastructure and a clear focus, especially the API part, which can end up really strange if you start building it around a user interface.
You can take a look at my Rails + React project at github.com/yovasx2/aquacontrol
Don't forget to star and fork it.
Since the YouTube API requires that you run your files on a webserver rather than locally (http://code.google.com/apis/youtube/js_api_reference.html#GettingStarted), how can you test the API without pushing it to your live production server?
I'm using Ruby on Rails hosted on Heroku.
Note: To test any of these calls, you must have your file running on a webserver, as the Flash player restricts calls between local files and the internet.
That does not sound like it needs to go to a production server. I am not familiar with Flash, but http://localhost/test.html (as opposed to file://test.html) probably works. And even if not, any old test server will do.
I use a web service to convert files. The service returns the converted file as an HTTP POST, along with identifier data. My app receives the response, updates its database and saves the file to the appropriate location.
At least that's the idea, but how do I develop and test this on a local machine? Since my machine isn't publicly facing, I can't give the service a URL to direct the response to. What's the best way to handle this? I want to keep the process as clean as possible, and the only ideas I can come up with have seemed excessively kludgy.
Given how common REST API development is, I assume there are well-established best practices for this. Any help appreciated.
The solution will change a bit depending on which server you're using.
But the generally accepted method is to use the loopback address, 127.0.0.1, in place of a fully qualified domain name. Your server may need to be reconfigured to listen on this IP address, but that's usually a trivial fix.
example: http://127.0.0.1/path/to/resource.html
You can use curl, or even your browser if your application has a proper frontend. There are many other similar tools for testing this from the command line, and each language has a set of libraries for establishing HTTP connections and transferring data over them.
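In Ruby, for instance, the same check as the curl call might look like this (the path is just a placeholder):

require 'net/http'

# Equivalent of `curl http://127.0.0.1/path/to/resource.html`
response = Net::HTTP.get_response(URI('http://127.0.0.1/path/to/resource.html'))
puts response.code
puts response.body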
If your machine isn't accessible to the service you are using, then your only option is really to build a local implementation of the service that will exercise your API. A rake task that sends the POST with the file and the info would be a nice approach, so you could start your Rails app locally and then kick off the task with some params to run your application through its paces.
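A rough sketch of such a rake task (the task name, route, and params are made up for illustration; it just multiparts a local file at your app the way the real service would):

# lib/tasks/fake_converter.rake -- hypothetical task, route, and params
require 'net/http'

namespace :fake_converter do
  desc 'Pretend to be the conversion service posting a converted file back to the app'
  task :callback, [:path, :identifier] => :environment do |_t, args|
    uri = URI('http://127.0.0.1:3000/conversions/callback')

    request = Net::HTTP::Post.new(uri)
    # Multipart POST: the file plus the identifier data, like the real service sends
    request.set_form([['identifier', args[:identifier]],
                      ['file', File.open(args[:path])]],
                     'multipart/form-data')

    response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
    puts "#{response.code} #{response.body}"
  end
end

Then, with the Rails server running locally, something like rake "fake_converter:callback[tmp/converted.pdf,42]" plays the part of the service.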
This is the case any time you are trying to develop a system that can't connect to a required resource during development. You need to build a development harness of sorts so that you can exercise all the different types of actions the external service will call on your application.
This certainly won't be easy or straightforward, especially if your interface to this external service is complicated. Be sure to have your test cases send bad POSTs to your application so that you are sure you handle both what you expect and what you don't.
Also make sure that you do some integration testing with the actual service before you "go-live" with the application. Hopefully you can deploy to an external server that the web service will be able to access in order to test. Amazon's EC2 hosting environment would let you set up a server very quickly, run your tests, and then shut down without much cost at all.
You have two options:
1. Set up dynamic DNS and expose your app to the outside world. This only works if you have full control over your network.
2. Use something like webrat to fake the POSTs to your app. Since it's only one request, this seems pretty trivial.
Considering that you should be writing automated tests for this, I'd go with #2. I used to do #1 when developing Facebook apps, since there were far too many requests to mock them all out with webrat.
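For option #2, here's a rough sketch of what faking that inbound POST could look like. It uses a Rails/RSpec request spec (rack-test style helpers) rather than webrat itself, and the route, params, and Conversion model are made-up names for illustration:

# spec/requests/conversion_callback_spec.rb -- hypothetical route, params, and model
require 'rails_helper'

RSpec.describe 'conversion service callback', type: :request do
  it 'stores the converted file that the service posts back' do
    file = Rack::Test::UploadedFile.new('spec/fixtures/converted.pdf', 'application/pdf')

    # Pretend to be the conversion service hitting our app
    post '/conversions/callback', params: { identifier: '42', file: file }

    expect(response).to have_http_status(:ok)
    expect(Conversion.find_by(identifier: '42')).to be_present
  end
end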
If your question is about testing, why don't you use mocks to fake the server? It's more elegant than using Webrat, and easier to deploy (you only have one app instead of an app and a test environment).
More info about mocks: http://blog.floehopper.org/presentations/lrug-mock-objects-2007-07-09/
You've got some info about mocks with RSpec here: http://rspec.info/documentation/mocks/
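As a small illustration of the mock-object idea, in present-day RSpec syntax rather than the older APIs those links describe, and with a hypothetical ConverterClient wrapper standing in for the real service:

# spec/services/conversion_request_spec.rb -- ConverterClient and ConversionRequest are hypothetical app classes
require 'rails_helper'

RSpec.describe 'requesting a conversion' do
  it 'hands the file to the converter service without any real HTTP traffic' do
    client = instance_double('ConverterClient')
    allow(ConverterClient).to receive(:new).and_return(client)
    expect(client).to receive(:convert).with('demo.doc').and_return(job_id: 42)

    ConversionRequest.new('demo.doc').submit
  end
end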