Share variables between hosts with Go

I have an application written in Go. It loads basic data into a global variable so that the application responds quickly, and it exposes an HTTP endpoint to update that variable whenever a user changes the underlying data in the database.
But now I have deployed a second server and put a proxy in front, and that creates a problem: when a user sends a request to the update URL, the proxy routes it to only one of the servers. That server updates its copy of the variable, but the others do not.
For example, utils.go:
package utils

var BasicDatas map[string]*MyModel

// UpdateVar refreshes BasicDatas after a user changes the database.
func UpdateVar() {
	// do some work
}

// PreLoadVar loads the basic data into BasicDatas at startup.
func PreLoadVar() {
	// preload data into BasicDatas
}
and main.go:
package main

import (
	"codebase/utils"
)

func main() {
	utils.PreLoadVar()
}
So is there any way to share this variable between multiple hosts? Or is there a library that can help with this?
Nsq.io seems like a good choice, but I'd prefer something simpler if it exists.
Thanks :)

Your global variable is a very simple form of caching, which is well known to be problematic when you are running your application on multiple servers. There are several ways to fix this problem:
As suggested by Bill Nelson: use your database as the central storage and do not cache your data in your application at all.
Externalize your cache with a simple key-value store, for example using Redis together with one of the many Redis clients in Go (see the sketch after this list).
Use a distributed cache such as groupcache.
If you have too much time on your hands you could use a messaging solution like nsq to implement your own distributed cache, but I would advise against it - this is not an easy problem.
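As a rough sketch of the Redis option: assuming MyModel is the JSON-serializable struct already defined in your codebase and that all application servers can reach the same Redis instance, the shared data lives in Redis rather than in a package-level map. The github.com/redis/go-redis/v9 client and the "basic:" key prefix here are illustrative choices, not requirements.

package utils

import (
	"context"
	"encoding/json"

	"github.com/redis/go-redis/v9"
)

// All servers talk to the same Redis instance, so an update made through
// one server is immediately visible to the others.
var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// SaveModel serializes a model and stores it under a shared key.
func SaveModel(ctx context.Context, key string, m *MyModel) error {
	b, err := json.Marshal(m)
	if err != nil {
		return err
	}
	return rdb.Set(ctx, "basic:"+key, b, 0).Err()
}

// LoadModel fetches and deserializes a model previously stored by any server.
func LoadModel(ctx context.Context, key string) (*MyModel, error) {
	b, err := rdb.Get(ctx, "basic:"+key).Bytes()
	if err != nil {
		return nil, err
	}
	var m MyModel
	if err := json.Unmarshal(b, &m); err != nil {
		return nil, err
	}
	return &m, nil
}

With this approach UpdateVar simply calls SaveModel and PreLoadVar (or any read path) calls LoadModel, so there is no in-process state to keep in sync.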

Related

Share constants between iOS client code and Parse cloud code

I have an iOS app that uses Parse as a backend, where some cloud code is executed.
Both have to share the same constants.
I can share these constants on the client side via Obj-C #import, and I can share them on the cloud code side via module.exports / require.
But how can I share it between client code and cloud code? It is simply error prone to define the same constants twice.
Parse offers a config object that can be queried like a class (returning an NSDictionary in iOS). Moreover, it can be configured via the web UI at parse.com. See docs here.
Upon startup, your app can retrieve the config and cache it locally. You may choose to cache it semi-permanently (say, with NSUserDefaults) and then use the local copy indefinitely. I usually opt for a fixed expiration period (weekly, say, so my constants are not-quite constant). The startup logic is: if the interval between now and the last config fetch exceeds a week, fetch again and replace the cached copy.

How to persist Firebase objects to disk in iOS?

It seems that the Firebase iOS implementation doesn't support offline caching of the client model. What this means in practice is:
For Firebase apps requiring authentication, you need to first authenticate and wait for Firebase to finish the login (check the user identity, open a socket, etc.) before you can start moving data. This takes 1-8 seconds (usually 2-5) depending on network conditions, at least here in Finland.
After authenticating, Firebase downloads the initial set of data and initializes the client cache. The time this takes depends on the size of the data you add listeners for, but it's usually quite fast.
The problem here is that if you're using Firebase to implement, for example, a messaging app, you'd most likely want to show the user a previously cached version of the message threads and messages before the actual connection with the backend server is established.
I'd assume a correct implementation for this would need to handle:
1. The client-side model <-> Firebase JSON mapping (I use Mantle for this)
2. Persisting the client-side model to disk (a manual implementation using NSKeyedArchiver, Core Data, or similar?)
3. Synchronizing the on-disk model with the Firebase-linked model in memory when the connection is available (a manual implementation?)
Has anyone come up with a solution (own or 3rd party) to achieve 2) and 3)?
It seems Firebase has solved this problem since this question was asked. There are a lot of resources on Offline Capabilities now with Firebase, including disk persistence.
For me, turning on persistence was as simple as the following in my AppDelegate:
Firebase.defaultConfig().persistenceEnabled = true
Assuming your app has been run with an internet connection at least once, this should work well in loading the latest local copy of your data.
There is a beta version of this technology within the client for iOS, described here: https://groups.google.com/forum/#!topic/firebase-talk/0evB8s5ELmw. Give it a go and let the group know how it goes.
Just one line is required for persistence with Firebase in iOS:
FIRDatabase.database().persistenceEnabled = true
It can be found here in the Firebase docs.

Rails connect to Asterisk and make phone calls

Hi, I have googled all day long but I can't find an answer.
I have to write a web app that talks to Asterisk.
It should be able to do click-to-call operations.
Can you guys recommend something?
I came across a few projects but I'm still not sure.
I just want to connect to Asterisk and place calls from the web app.
Thanks
If you're a Ruby programmer the best way for you to hook into Asterisk is adhearsion. It wraps up Asterisk's AGI and Manager (MAPI) APIs for you.
Also have a look at "SIP, asterisk, adhearsion and VoIP", and in particular Adam Kalsey's answer. He works for Tropo, which sponsors the adhearsion project.
First you need to know that the protocol Asterisk uses is SIP; you can learn more about it on Wikipedia.
Since you want to use a Rails application, you may want to stay in Ruby as well; there's a Ruby implementation named OverSip, so you can check its API and see if it fits your requirements.
If you are aiming at web calls, you'll need WebRTC, Flash or a Java applet. For WebRTC you can check sipML5 for an open-source solution.
You can also opt for an interface that starts a call from one number to another using your phone: when the first call is picked up, the server starts ringing the destination.
You could also make use of cloud communications providers like Twilio, Tropo, etc.
Try this Google search:
rails asterisk manager interface
I saw some interesting things right off. I am not trying to be one of those "just use Google" type people, I just didn't want to paste in all the links I found from this Google search.
Check it out, hope it helps.
There are several ways to do this, but the three easiest ones are:
1. Generate a call file on the Asterisk server
These files should be written to the dir
/var/spool/asterisk/outgoing
Asterisk will then pick up the file, process it, and delete it.
It's pretty aggressive about this, so it's recommended to write the file into a temporary directory first and then move it into the spool dir for processing (see the sketch below).
A tutorial on the file format is here:
https://www.voip-info.org/asterisk-auto-dial-out/
(I personally feel this is a bit "hacky", and prefer doing it with an API call)
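As a rough sketch of that flow in Go (the channel, context, extension, and paths below are placeholders; see the tutorial above for all supported keys), write the call file somewhere on the same filesystem and only then rename it into the spool directory, so Asterisk never picks up a half-written file:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// placeCall writes an Asterisk call file and moves it into the outgoing
// spool directory. The rename is atomic as long as the temporary directory
// is on the same filesystem as the spool directory.
func placeCall(channel, context, exten string) error {
	contents := fmt.Sprintf(
		"Channel: %s\nContext: %s\nExtension: %s\nPriority: 1\nMaxRetries: 2\nRetryTime: 60\nWaitTime: 30\n",
		channel, context, exten)

	tmp := filepath.Join("/var/spool/asterisk/tmp", "click2call.call")
	if err := os.WriteFile(tmp, []byte(contents), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join("/var/spool/asterisk/outgoing", "click2call.call"))
}

func main() {
	if err := placeCall("SIP/1000", "outgoing", "15551234567"); err != nil {
		log.Fatal(err)
	}
}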
2. Generate the call through the AMI API
Use the Originate action of the AMI API to generate the call. It's pretty easy to set up: just configure the manager.conf file, which enables the manager interface on TCP port 5038, from which you can call the API (a minimal example follows below).
https://www.voip-info.org/asterisk-config-managerconf/
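For illustration only (the address, credentials, channel, and extension below are placeholders that must match your manager.conf and dialplan), a bare-bones Go client can log in over the manager port and fire an Originate action:

package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
)

// originate connects to the Asterisk Manager Interface, logs in, and asks
// Asterisk to call the given channel and connect it to an extension in the
// outgoing context. Each AMI action is a block of "Key: Value" lines ended
// by a blank line.
func originate(addr, user, secret, channel, exten string) error {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return err
	}
	defer conn.Close()

	fmt.Fprintf(conn, "Action: Login\r\nUsername: %s\r\nSecret: %s\r\n\r\n", user, secret)
	fmt.Fprintf(conn,
		"Action: Originate\r\nChannel: %s\r\nContext: outgoing\r\nExten: %s\r\nPriority: 1\r\nAsync: true\r\n\r\n",
		channel, exten)

	// Print the first few response lines to see whether Asterisk accepted the actions.
	r := bufio.NewReader(conn)
	for i := 0; i < 10; i++ {
		line, err := r.ReadString('\n')
		if err != nil {
			break
		}
		fmt.Print(line)
	}
	return nil
}

func main() {
	if err := originate("127.0.0.1:5038", "my_username", "my_password", "SIP/1000", "100"); err != nil {
		log.Fatal(err)
	}
}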
3. Set up the call using the ARI API
First you need to set up ari.conf; this is enough for now:
[general]
enabled = yes
pretty=yes
allowed_origins=http://ari.asterisk.org
[my_username]
type = user
read_only = no
password = my_password
password_format = plain
This is a little bit more complicated to set up, but it really isn't that hard once you get past the technical geek-speak. Just set up two channels, set up a mixing bridge, and add both channels to the bridge.
To set up a click2call you don't even need to do that...
This is the call we use (Ruby):
HTTParty.post("http://sipserver.com/ari/channels?endpoint=SIP/#{sip_id}&extension=#{number}&context=outgoing&priority=1&timeout=30&api_key=#{USERNAME}:#{PASSWORD}")
where
#{sip_id} is your registered SIP username,
#{number} is the extension that is sent to the dialplan, and
#{USERNAME} and #{PASSWORD} are from ari.conf.
(Note that if you need to send channel variables, they go in a separate JSON body for the variables parameter of the originate command.)
A really useful tool for understanding how this works is the Swagger UI at http://ari.asterisk.org. We already allowed this origin in ari.conf, so it should be ready to go. Remember to open your ports in firewalls etc.
Set your server IP and port there; the api_key is in the format my_username:my_password.

Best way to serve files?

I'm a novice web developer with some background in programming (mostly Python).
I'm looking for some basic advice on choosing the right technology.
I need to serve files over the internet (MP3s), but I need to implement some control over access:
1. Files will be accessible only to authorized users.
2. I need to keep track of how many times a file was downloaded, by whom, etc.
What might be the best technology to implement this? That is, should I learn Apache, or maybe Django? Or maybe something else?
I'm looking for a pointer in the right direction.
Thanks!
R
If you need to track/control the downloads, that suggests the MP3 URLs need to be routed through a Rails controller. Very doable. At that point you can run your checks, track your stats, and send the file back.
If there are a lot of MP3s, you'd rather not have Rails do the actual sending of the MP3 data, as that wastes its time and ties up an instance. Look into X-Sendfile, where Rails sends a response header indicating the path of the file to serve and Apache intercepts it and does the actual sending (sketched below, after the links).
https://tn123.org/mod_xsendfile/
http://rack.rubyforge.org/doc/classes/Rack/Sendfile.html
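The pattern is not specific to Rails. As a sketch (in Go, with hypothetical authorized and recordDownload helpers standing in for your own auth and stats code), the application only checks access and records the hit, then sets the X-Sendfile header so Apache streams the file; nginx uses X-Accel-Redirect for the same job:

package main

import (
	"log"
	"net/http"
)

// download authorizes the request and records the download, then hands the
// actual file transfer to the front-end web server via the X-Sendfile header.
func download(w http.ResponseWriter, r *http.Request) {
	if !authorized(r) {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	recordDownload(r)
	w.Header().Set("Content-Type", "audio/mpeg")
	w.Header().Set("X-Sendfile", "/srv/media/song.mp3") // path served by Apache, not by the app
	w.WriteHeader(http.StatusOK)
}

// authorized and recordDownload are placeholders for real access checks and stats.
func authorized(r *http.Request) bool { return r.Header.Get("Authorization") != "" }
func recordDownload(r *http.Request)  { /* increment a per-user, per-file counter */ }

func main() {
	http.HandleFunc("/download", download)
	log.Fatal(http.ListenAndServe(":8080", nil))
}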
You could use Django with Lighttpd as the web server. With Lighttpd you can use mod_secdownload, which enables you to generate one-time-only URLs.
More info can be found here: http://redmine.lighttpd.net/projects/1/wiki/Docs_ModSecDownload
You can check for permissions in your Django (or any other) app and then redirect the user to this disposable URL if they pass the permission check. A sketch of how the URL is generated follows.
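For illustration, here is roughly how such a disposable URL is built for the classic MD5 mode of mod_secdownload (sketched in Go; the secret, URI prefix, and file path are placeholders and must match lighttpd.conf, and newer lighttpd versions also support HMAC-based algorithms, so check the docs above for your version):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"time"
)

// secDownloadURL builds a time-limited URL in the form
// <uri-prefix><md5-token>/<hex-timestamp><rel-path>, where the token is
// md5(secret + rel-path + hex-timestamp). lighttpd rejects the link once
// secdownload.timeout seconds have elapsed.
func secDownloadURL(secret, uriPrefix, relPath string) string {
	tsHex := fmt.Sprintf("%08x", time.Now().Unix())
	sum := md5.Sum([]byte(secret + relPath + tsHex))
	return fmt.Sprintf("%s%s/%s%s", uriPrefix, hex.EncodeToString(sum[:]), tsHex, relPath)
}

func main() {
	// After the permission check, redirect the user to this URL.
	fmt.Println(secDownloadURL("verysecret", "/dl/", "/album/track01.mp3"))
}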

Using Pylons as a Web Backend

I am using Pylons for two things:
1) Serving API requests (returning JSON describing my SQLAlchemy models)
2) Running a script 24/7 that fetches flight information from the internet (using HTTP) and pushes it into my DB (again using my models).
I am NOT using Pylons as a front end, but as a back end.
What would be the best way for my script to make HTTP requests? Is urllib / urllib2 my best option here?
How would I run my script constantly and not on a request serving basis? Is Celery / Cronjobs what I am looking for here?
Thanks!
Regarding your first question: yes, urllib/urllib2 is probably the best bet. It has very solid functionality for making HTTP requests to other servers.
Regarding your second question: Use your database. It's not super-scalable, but it's easy to implement a system where you have a flag in the database that is, essentially, an on-off switch for the application. Once that exists, make a page (with whatever security precautions you think prudent) that sets the flag and starts the application on a loop that continues indefinitely as long as the flag is set. A second page clears the flag, should you need to stop the HTTP requests without killing the entire server process. Instead of 'pages,' they could also be shell scripts or short standalone scripts. The important part is that you can implement this without requiring Celery or cron (although if you're already familiar with either, have at).
