Membase? How does this work?

When I add an IP address and make a connection, does the client get all of the servers' available IP addresses?
Or does the client need to know at least two IP addresses for when one of them goes down?
This is the code I've been testing with (Java):
// Uses spymemcached (net.spy.memcached): AddrUtil, BinaryConnectionFactory, MemcachedClient
List<String> addrList = new ArrayList<String>();
addrList.add("192.168.20.105:11211");
addrList.add("192.168.20.106:11211");
addrList.add("192.168.20.101:11211");
try {
    List<InetSocketAddress> addr = AddrUtil.getAddresses(addrList);
    mbsClnt = new MemcachedClient(new BinaryConnectionFactory(), addr);
} catch (IOException e) {
    e.printStackTrace();
}
If I've added only one IP address and that server goes down while I'm doing get and set operations, will the client be able to connect to other available servers?
I ask because if I add an observer and check the available servers, I don't see any (if I add only one server to the list).
Does this mean I have to add as many IP addresses as possible to avoid connection failures?
Another question: I can see that when I add the IP address, I have to put in a port number, which is linked to a specific vBucket. Does having all the clients watching the same vBucket cause any overload? If so, how am I supposed to balance the clients to watch different vBuckets?
Sorry if my English isn't really getting through to you. T^T
Any kind of advice or answers will be very much appreciated. Thanks!

The issue here is that you're using the memcached constructors in MemcachedClient. If you are on 2.7.x or lower, you want to use the constructor that takes a list of URIs, a bucket name, and a password. That constructor will connect to a Membase/Couchbase node and get a list of all servers in the cluster. Then, if you rebalance or fail over nodes, spymemcached will do the right thing and connect to new nodes or drop connections to nodes leaving the cluster.
In spymemcached 2.8.x and later we actually removed this functionality and placed it into a new project called Couchbase Client. In that project you will find only the constructor I mentioned above, which should make it more obvious what you should do. Couchbase Client 1.0.1 currently doesn't have support for views, but that will be added in the next release. Also, Couchbase Client is compatible with all versions of Membase.
One other thing: you only need to provide one URI in order to get a list of all nodes in the cluster, but it is recommended that you add as many URIs as you have servers in the cluster. The reason is that if the node you specify in the URI goes down, you will lose connection to the cluster, since you won't be able to get cluster updates. If you specify more than one URI, then spymemcached/Couchbase Client will try to connect to the next node in the list of URIs.
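For reference, a minimal sketch of that cluster-aware connection might look like the following (the node addresses, bucket name, and empty password are placeholders; adjust them for your cluster):

import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class ClusterAwareConnect {
    public static void main(String[] args) throws Exception {
        // List several nodes so cluster updates survive a node failure.
        List<URI> baseURIs = new ArrayList<URI>();
        baseURIs.add(new URI("http://192.168.20.105:8091/pools"));
        baseURIs.add(new URI("http://192.168.20.106:8091/pools"));
        baseURIs.add(new URI("http://192.168.20.101:8091/pools"));
        // Connects to one node, fetches the full cluster topology, and
        // tracks rebalance/failover automatically.
        CouchbaseClient client = new CouchbaseClient(baseURIs, "default", "");
        client.set("greeting", 0, "hello");
        System.out.println(client.get("greeting"));
        client.shutdown();
    }
}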

Related

Are there changes in Workbench connecting to Google Cloud MySQL?

I have seen some similar posts, but this one is intriguing me a lot. I moved from the Pacific area (with my instance in US Central) to Finland. I had been able to connect to the MySQL database with Workbench without any issue before (with a whitelisted IP address). Since the move to northern Europe, I have not been able to connect to the DB at all, with no changes to my local setup.
I thought that it may be a time-out issue, but I have just created an instance in the Europe-North region, and the same problem exists.
I have also switched from my company's WiFi (and IP) to a mobile based IP address (I have been able to use both in the past in the Pacific area).
My only remaining (illogical) thought is that it is a local firewall issue, but I am not sure and do not know how to fix that.
I am scratching my head - anyone with an answer?
The error message looks like the ones in other posts, but none of those solutions apply. (The original post included a screenshot of the error.)
If you have moved locations, you likely have changed IP addresses as well. Make sure that you have authorized your network's current IP, and that your firewall is open to your instance's public IP on port 3306.
You can also look into using the Cloud SQL proxy, which doesn't require authorizing an IP address.
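If you also connect to the instance programmatically, the proxy approach is roughly this: once the proxy is listening locally, a standard connection simply targets 127.0.0.1. A minimal sketch, where the database name and credentials are placeholders and MySQL Connector/J is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

public class ProxyConnect {
    public static void main(String[] args) throws Exception {
        // The Cloud SQL proxy forwards 127.0.0.1:3306 to the instance over
        // an authenticated tunnel, so no IP allow-listing is required.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/mydb", "dbuser", "dbpassword");
        System.out.println("Connected: " + conn.isValid(5));
        conn.close();
    }
}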

I want to access Jira (Docker on Synology DS716+II) from the LAN not only via IP_OF_SYNOLOGY:PORT but also via, for example, jira.synology.local

I am working with a Synology NAS, type DS716+II, DSM 6.1.4-15217 Update 2, on which Docker runs with a Jira container.
What I'm assigned to get working is access to Jira's web interface via, let's say, jira.synology.local, with "synology" being the server name.
I read a lot about nginx and how it's built in since DSM 6.x, but I can't seem to get it to work properly at all.
I can access Jira's web interface from another machine within the LAN via IP_OF_SYNOLOGY:PORT, so when setting up a reverse proxy on the server it should be pointing to LOCALHOST:PORT, right? I have also tried using the actual IP instead of LOCALHOST, but without success.
I can access the interface of Synology itself not only via IP_OF_SYNOLOGY:PORT but also via DOMAINNAME.LOCAL if I set the domain name.
I really don't know what I'm missing, and I've tried everything I can think of. Does anyone have experience with this?
If some information is missing, I'll gladly provide it. I'm fairly new to Synology, I have to admit. Thanks in advance!
So this has gotten zero responses, but I figure someone will probably have a similar "problem" in the future, so I will answer anyway.
I solved everything when I set up Active Directory. When installing AD, the DNS Server package is automatically installed too.
So we have JIRA running in a Docker container (on, let's say, port 12345) and I want to access it via the LAN at jira.domainname.
To do so we need DSM 6.x or higher (for nginx) and the DNS Server package installed. That's it.
In the DNS Server you will have to create a new master zone. You can freely choose the domain name, but the Master DNS server must be the IP of your Synology station, since it functions as the DNS server. Then edit the Resource Records: add an A record that maps the domain name to the Synology's IP, and a CNAME record that maps jira.domainname onto it. (The original post showed these settings in screenshots.)
Now the last step in setting up the DNS server is to tell it what to do if there is no specific record for a query. For example, if you want to open jira.domainname in your browser, there is a specific record for that and the DNS server knows how to direct it. But if you want to open, say, google.com, the DNS server has no information on that and does not know what to do. So we tell the DNS server to forward the request if it has no records for it: enable the forwarders and put in the IP of your gateway/managed switch as primary and some public DNS server (8.8.8.8 for one of Google's DNS servers) as secondary.
Please remember that jira.domainname always stands for the domain name you chose, and 192.168.0.200 always stands for the IP of your Synology station.
So now the DNS server is completely set up. Next we want to take advantage of the built-in reverse proxy (which runs on nginx in the background). To do so, open the reverse proxy settings (the original post showed the navigation in screenshots) and create a new reverse proxy rule that forwards the source jira.domainname to the destination LOCALHOST:PORT of the Jira container.
Now several URLs can point to the same destination (your Synology, 192.168.0.200) but on different ports. That comes in very handy for applications running in Docker.
If you are running this in a home setup or small office, you are probably working with a standard-issue commercial router, such as a FritzBox by AVM. Those are pretty good, but beware that some prohibit so-called DNS rebinding, which means that DNS responses resolving to a local IP will not be allowed. Since in this setup the DNS server (your Synology) and the destination JIRA (also your Synology) are in the same LAN, we have to create an exception. Other routers probably don't suppress those responses, but if they do, exceptions are necessary.
The next step is to tell your gateway or managed switch to use the newly set-up DNS server as the primary DNS server. For a FritzBox, this is in the router's network settings (the original post showed where): put in the IP of your DNS server and a secondary DNS server. The secondary is important as a fallback in case your DNS server stops working at some point.
Now that everything is set up, I would recommend restarting the router/managed switch, the Synology, and the workstation you are working on, to flush all caches. After that you can simply open your browser, type in jira.domainname, and JIRA should open up. You can also open a terminal/cmd and type nslookup jira.domainname to see if it is being resolved correctly.
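If you'd rather script that check, a tiny Java probe does the same job as nslookup (jira.domainname is a placeholder for whatever name you chose):

import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // Resolves through the OS resolver; with the Synology DNS server in
        // place this should print the station's IP (e.g. 192.168.0.200).
        InetAddress addr = InetAddress.getByName("jira.domainname");
        System.out.println(addr.getHostAddress());
    }
}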
I really hope this will help someone at some point. If there are any additional questions, please feel free to comment on this or write to me directly!

How does the URL I type in lead to the eventual content I see in my browser?

I'm trying to figure out how these all work together, and there are bits and pieces of information all over the internet.
Here's what I (think) I know:
1) When you enter a URL into your browser, it gets looked up in a domain name server (DNS), and you are sent back an IP address.
2) Your computer then follows this IP address to a server somewhere.
3) On the server there are nameservers, which direct you to the specific content you want within the server. -> This step is unclear to me.
4) With this information, your request is received and the server relays site content back to you.
Is this correct? What do I have wrong? I've done a lot of searching over the past week, and the thing I think I'm missing is the big picture explanation of how all these details tie together.
Smaller questions:
a) How does the nameserver know which site I want directions to?
b) How can a site like GoDaddy own URLs? Why do I have to pay them yearly fees, and why can't I buy a URL outright?
I'm looking for a cohesive explanation of how all this stuff works together. Thanks!
How does content get loaded when I put a URL in a browser?
Well, there are some very good docs available on this topic; each step has its own logic and algorithms attached to it. Here I'll give you a walkthrough.
Step 1: DNS lookup: The domain name gets converted into an IP address. In this process, the domain name from the URL is used to find the IP address of the associated server machine by looking up records on multiple servers called name servers.
Step 2: Service request: Once the IP address is known, a service request, depending on the protocol, is created in the form of packets and sent to the server machine using the IP address. In the case of a browser it will normally be an HTTP request; in other cases it can be something else.
Step 3: Request handling: Depending on the service request and the underlying protocol, the request is handled by a software program which normally lives on the server machine whose address was discovered in the previous step. Based on the logic programmed into the server program, it returns an appropriate response; in the case of HTTP this is called an HTTP response.
Step 4: Response handling: In this step the requesting program (in your case a browser) receives the response from the previous step and renders and displays it as defined by the protocol; in the case of HTTP, the HTTP body is extracted and rendered, which is written in HTML.
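For a concrete picture, all four steps are what a single high-level library call performs under the hood. A minimal Java sketch, with example.com standing in for any site:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FetchPage {
    public static void main(String[] args) throws Exception {
        // One call that internally performs the DNS lookup (step 1), sends
        // the HTTP request (step 2), lets the server handle it (step 3),
        // and hands us the response body to display (step 4).
        URL url = new URL("http://example.com/");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}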
How does the nameserver know which site I want directions to?
A URL has a very well-defined format, from which a browser finds the hostname/domain name; that is used in turn to find the associated IP address. Name servers run different algorithms to find the correct server machine's IP.
Find more about DNS resolution here.
How can a site like GoDaddy own URLs? Why do I have to pay them yearly fees, and why can't I buy a URL outright?
Domain names are resources which need management and regulation, and this is done by ICANN. They have registries, from which registrars (like GoDaddy) get domains and book them for you; the cost you pay is split between ICANN and the registrar.
The registrar also does a lot of work for you, e.g. setting up name servers, providing hosting, etc.
Technically you can create your own domain name, but it won't be free, of course, because you would need to create a name server and replicate it to other servers; that way you can have whatever name you want (it has to be unique). A simple way to try this locally is by editing your hosts file: on Linux it is located at /etc/hosts, and on Windows at C:\Windows\System32\drivers\etc\hosts. But that is no good on the internet, since it won't be accepted by other servers.
(A precise and detailed description of this process would probably take too much space and time to write; I am sure you can google it somewhere.) So, although very simplified, you have a pretty good picture of what is going on, but some clarifications are needed (again, I will be somewhat imprecise):
Step 2: Your computer does follow the IP address received in step 1, but the request sent to that IP address usually contains one important piece of information called the 'Host header', which is the actual name as you typed it in your browser.
Step 3: There is no nameserver involved here; the software (/hardware) is usually called a 'webserver' (for example Apache, IIS, nginx, etc.). One webserver can serve one or many different sites. In case there is more than one, the webserver will use the 'Host header' to direct you to the specific content you want.
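To make the Host header concrete, here is a minimal Java sketch that resolves a name (step 1) and then sends the request with the Host header written out by hand (example.com is just a stand-in):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.Socket;

public class HostHeaderDemo {
    public static void main(String[] args) throws Exception {
        // Step 1: DNS lookup, the name becomes an IP address.
        InetAddress ip = InetAddress.getByName("example.com");
        System.out.println("Resolved to " + ip.getHostAddress());
        // Step 2: connect to that IP and send an HTTP request whose
        // Host header names the site we actually want on that server.
        try (Socket socket = new Socket(ip, 80)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            out.print("GET / HTTP/1.1\r\nHost: example.com\r\n"
                    + "Connection: close\r\n\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}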
ICANN 'owns' the domain names, and the registration process involves technical and administrative effort, so you pay registrars to handle that.

ELB not routing traffic to healthy instance

This seems to have something to do with the subnet/availability zone, but I'm new to using a VPC and it's eluding me.
VPC: 10.80.0.0/16
subnet: 10.80.1.0/24 (us-east-1b)
subnet: 10.80.2.0/24 (us-east-1a)
All instances are Windows Server 2012.
I have an internet facing ELB created within my VPC (10.80.0.0/16). There is one instance added from AZ us-east-1a, which is on subnet 10.80.2.0/24. The instance is running IIS 7.5, with an app running on port 80 and /health.aspx set up for use as the ELB health check.
Internal traffic on the VPC is flowing normally (unrestricted). I can request health.aspx from this instance from another instance in us-east-1b (10.80.1.0/24). I can also copy files from one instance to another.
Outbound traffic is unrestricted. I can RDP to the instance (when connected to our VPN) and open a browser and request a web page and get it.
The ELB says the instance is healthy and I can see the requests to health.aspx in the IIS logs. Both the ELB and the instance are configured with a security group that allows 80 and 443.
But if I try to request {elb-url}/health.aspx over the open internet the request just times out. Similarly, with an elastic IP associated to the instance, a request to {elastic-ip}/health.aspx times out.
@Chris, thanks for the response. As it happens, I've already worked it out with some help from a friend. I'll post my findings here for posterity (in case anybody else is similarly confused about how ELB works).
This would be clearer with a diagram, but the summary is that in each availability zone, you need to create both a public and a private subnet. When you add availability zones to your ELB, you need to select the public subnet for the zone. This had already been done in us-east-1b before I got to this setup, and I had simply missed this nuance of ELB configuration. So for the new availability zone, I had to do this:
us-east-1c
private subnet 10.1.3.0/24 (using a NAT instance as the default route)
public subnet 10.1.4.0/24 (using the internet gateway as the default route)
Then my instance goes in the private subnet as expected.
And the linchpin of this whole thing is (drum roll...):
When I add us-east-1c to my ELB, I have to select the public subnet, 10.1.4.0. Otherwise the instances will pass the health check (since the ELB can communicate with any instance within my entire VPC), but the responses from the servers cannot make it back out to the public internet.
This is what is so confusing, and I still don't fully understand it. The instance can make a request for, say, www.google.com; I can RDP to it, open a browser, and get the web page. But a request from an outside host (like my laptop at my house) will die. Strange.
PS: another note: make sure you are using enough NAT instances for your load. I think we ran into an issue where our NAT instance simply ran out of ports because too many web servers were trying to route outbound connections to third-party APIs through it. Quite honestly, I'm not good enough at this level of network/OS troubleshooting to be sure, but my theory is that our 8 instances of IIS were holding too many connections open to the NAT instance. We were also overloading the NIC on that micro instance. I moved us up to two large instances, one per AZ, and things smoothed back out. Both NAT instances are humming and we're not seeing the hung processes in IIS anymore.
Debugging this kind of issue is always a challenge. Based on what you have written, I have a few ideas to suggest (which also apply generally to solving this kind of problem) that come from dealing with it a number of times.
Have you checked both the security groups and the network ACLs? Bear in mind that network ACL rules need to be specified in both directions, as they are stateless. Also bear in mind that ELBs are a bit unique in this regard: while they are associated with your VPC, they sometimes need extra rules to ensure connectivity. In the past I have debugged this by opening all network ACLs on all ports, then removing rules until it stopped working, in order to identify where the block was.
Security groups should be checked too. They are stateful, but make sure that your load balancer has permission to be hit from the web.
Have you checked that this isn't an application configuration problem? I don't know how IIS comes out of the box, but I would check that it is set up to respond to all hostnames.
Check that the ELB isn't an internal one, as that wouldn't be publicly addressable.
You say the ELB is configured with the health check, but it's worth checking that you also have a listener set up for port 80. It's in a separate tab on the dashboard, and you will need this in addition to the health check for connectivity through the ELB.
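If you want to script that external check, a minimal probe might look like this (the ELB URL below is a placeholder for your load balancer's DNS name):

import java.net.HttpURLConnection;
import java.net.URL;

public class ElbProbe {
    public static void main(String[] args) throws Exception {
        // Hit the ELB from outside the VPC; a timeout here while the backend
        // reports healthy usually points at subnet, ACL, or listener config.
        URL url = new URL("http://my-elb-1234567890.us-east-1.elb.amazonaws.com/health.aspx");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}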
Hope one of these tips is useful to you.

How can I update a DataSnap server while clients are still connected?

We use stateful DataSnap servers for some business logic tasks and also to provide clientdataset data.
If we have to update the server to modify a business rule, we copy the new version into a new empty folder and register it (depending on the Delphi version, either just by launching it or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first, then the new server will be used for the next clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients but this question is about internal modifications which do not affect the interface)
We are using socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote data module, which resulted in one app window per client. Maybe this could be a solution? (Because every new client will launch the currently registered executable.)
Update: does Delphi XE offer some support for 'hot deployment' of updated servers? We are using Delphi 2009 at the moment but would upgrade to XE if it made 'hot deployment' easier to implement.
You could separate your app server into two new servers: the first a simple proxy object redirecting all methods (and optionally containing state info, if any) to the second, which actually implements your business logic. You would also need to implement a "silent reconnect" feature within your proxy server in order not to disturb connected clients when you decide to replace the business app server. I never did such a design myself, but I hope the idea is clear.
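A hypothetical sketch of that proxy idea, written in Java purely for illustration (the original setup is Delphi/DataSnap, and every name here is made up):

// A thin front server that forwards every call to a swappable backend,
// so clients stay connected to the proxy across server upgrades.
interface BusinessLogic {
    String getData(String key);
}

class AppServerProxy implements BusinessLogic {
    private volatile BusinessLogic backend; // swapped atomically on upgrade

    AppServerProxy(BusinessLogic initial) {
        this.backend = initial;
    }

    // Called when a new server version is deployed; existing clients keep
    // talking to the proxy and never notice the switch.
    void replaceBackend(BusinessLogic newBackend) {
        this.backend = newBackend;
    }

    @Override
    public String getData(String key) {
        return backend.getData(key); // silent redirect to the current backend
    }
}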
Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this with COM libraries before with success. I am not sure if it would apply to remote launch rules, though, as it may look for an existing instance to attach to instead of a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.
There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag (a property or an out parameter on some commonly called method) on the server that the client checks periodically, telling the client to disconnect and reconnect (call it something like ImBeingRetired).
It's also possible to write callbacks under certain circumstances in DataSnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately, none of these options is the exact answer to your question, but they might give you some ideas.
(optional) Set up VMware vSphere, ESX, or find a hosting service that already has one.
Store the session variables in a DB.
Prepare two web boxes with two distinct IP addresses and deploy your stuff.
Set up DNS, firewall, load balancer, or a BSD VM so the name "example.com" resolves to web box 1.
Deploy the new version to web box 2.
Switch over to web box 2 using whatever routing method you chose.
Deploy the new version to web box 1 if things look OK.
Using DNS is probably easiest, but it takes time for the mapping to propagate to clients (if the client is outside your LAN), and two clients may see different results. Some firewalls have an IP address mapping feature with which you can map a public IP address to an internal IP address. The ideal way is to use a load balancer configured at 50:50 and change it to 100:0 when you want to upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD VM, but that probably requires some work.
Edit: What I meant to say is session variables, not sessions. You said the server is stateful; if it contains business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switch-over. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error from web box 1, and it will reconnect to web box 2.
You could also use three IP addresses (one public and two private) so the client always sees one address, which is a better method.
I have done something similar by having a specific table which held my "data version". Each time I updated the server or changed a system-wide global setting, I would increment this field. When a client starts, it always checks this value, and it checks again before any transactions/queries. If the value is ever different from when it first started, it goes through my re-initialization logic, which can easily include a re-login to an updated server.
I was using IIS to publish my app servers, so the data that would change was the path to the app server. I kept the old ones available to respond to any existing transactions that were still in play. Eventually these were removed once I knew there were no more client connections to that version.
You can easily work out which versions to keep around if you log which server each client last connected to (and therefore knows about).
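Language aside (the original is Delphi), the version-check handshake described above might be sketched like this; every name below is hypothetical:

public class VersionAwareClient {
    private long knownVersion;
    private final DataServer server; // stand-in for the app-server connection

    public VersionAwareClient(DataServer server) {
        this.server = server;
        this.knownVersion = server.getDataVersion();
    }

    public void beforeTransaction() {
        // Re-check the global "data version" before every transaction/query.
        long current = server.getDataVersion();
        if (current != knownVersion) {
            reinitialize(); // includes a re-login to the updated server
            knownVersion = current;
        }
    }

    private void reinitialize() {
        // Re-connection / re-login logic would go here.
    }
}

interface DataServer {
    long getDataVersion(); // reads the version field from the shared table
}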
For newer versions (Delphi 2010 and up), there is an interesting solution for systems using the HTTP transport:
Implementing Failover and Load Balancing in DataSnap 2010, by Andreano Lanusse
and a related question for the TCP/IP transport:
How to direct DataSnap client connections to various DS Servers?
