Is there a benefit to developing an iOS app against a Docker instance?

Our backend is containerised with Docker for use with minikube. As an iOS developer, I was wondering whether I can take advantage of this by running the backend locally on my laptop rather than having to communicate with a staging, cloud-based environment, which can often be flaky.
Am I misunderstanding how this technology works, or would this be a viable and useful case for Docker in iOS development, speeding up request and response times and allowing more control over the state of the backend I am building against?
Thanks for any clarity on this idea.

What you're describing is possible and is something I do regularly in my day job, so on the question of possibility: yes, you can do this.
Whether it brings any benefit is a broad question and depends on each individual's needs. If you are finding that your cloud instance is extremely slow at the moment and you don't have the capacity to improve its performance, a locally run Docker instance could very well help with this.
One thing to keep in mind, though, is that any changes you make to a local instance/server in order to make the app work as expected will need to be reflected in your production instance before your app goes live to the public; otherwise you will see undesired behaviour because the app is relying on server configuration that doesn't exist there.
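If the backend repo already ships a compose file, a minimal sketch of the local workflow could look like the following. The file name, the port 8080 and the /health path are assumptions for illustration, not details from the question.
# Start the containerised backend on the laptop (assumes a docker-compose.yml that publishes the API on port 8080)
docker compose up -d
# Sanity-check that the API is reachable before pointing the app at it (/health is a hypothetical endpoint)
curl -f http://localhost:8080/health
# In the iOS app, switch the base URL to http://localhost:8080 for simulator builds;
# a physical device can't see the laptop's localhost, so use the laptop's LAN IP there instead.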

How to prevent an application from starting before its dependencies are available

My Dockerfile starts the application using the following command
CMD ["/home/app/start-app.sh"]
start-app.sh contains the following; it waits until the RabbitMQ server is available. Is there a better way of achieving this for applications running under docker-compose or k8s?
while ! nc -z "$RABBITMQ_HOSTNAME" "$SPRING_RABBITMQ_PORT"; do sleep 10; done
I guess not. Even though Docker Compose provides depends_on, it just starts the containers in dependency order; it does not make sure that a specific service is actually available. The Docker documentation explains it in a way I can't improve on:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
You already do what is needed if you have to wait for another service to become ready. But, as they say, it is better to develop applications that can handle downtime of other services, because downtime can happen at any time. That's why Netflix developed Chaos Monkey. One might argue that handling all these possible states makes applications much harder to develop. That's true, but it's just the inherent complexity of distribution. You can either ignore it ("this will never happen", and I guarantee it will) or accept that it can happen and do the right thing. Maybe this decision depends on the risk and the damage such an outage would cause your company.
Services or microservices give us a lot of decoupling and flexibility, but they come at a cost that we should keep in mind.
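If you do keep an explicit wait in start-app.sh, one small improvement over an unbounded loop is to cap the retries, so a missing broker fails the container visibly and the orchestrator's restart policy takes over. A rough sketch, where the 30-attempt cap is my own assumption rather than anything from the thread:
# Bounded wait for RabbitMQ: give up after 30 attempts (~5 minutes) instead of looping forever
attempts=0
until nc -z "$RABBITMQ_HOSTNAME" "$SPRING_RABBITMQ_PORT"; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 30 ]; then
    echo "RabbitMQ still unreachable after $attempts attempts, exiting" >&2
    exit 1   # a non-zero exit lets a compose restart policy or k8s restart the whole container
  fi
  sleep 10
done
# ...then start the application, as start-app.sh already does after its wait loop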

Ruby/RoR development: locally or server

Our company has started developing its own systems in-house. We already have a couple of developers who will be responsible for writing code in Ruby/RoR.
We are currently discussing infrastructure, and I would like to ask: should we develop everything on local machines, then push it to a test server and later to production, or develop everything on a development/test server, then publish it for testing and later to production?
Just an update to the description above: by "local machines" I meant developers' desktops, and the test/development server is a machine in our office.
It's a valid question, and as such there's a trade-off to consider.
Generally: work locally. Web app development has a natural flow that leads developers to be saving and refreshing browsers many times an hour. All the time you save on network latency will add up, and it will be less frustrating for the developers.
There are downsides to working locally, however: you'll need to make sure that your set-up matches the testing/production servers exactly. That means everything down to your kernel version, Apache version, and Ruby/Rails version. DNS is easy, but again it must mimic the live situation perfectly in order for AJAX calls etc. to work seamlessly.
Even if you ensure all of the above, you will likely have to make a few minor changes when you move the app to a live server; there just always seems to be something, in my experience.
Also, running on a live server isn't SO painful for a developer. Saving a source file from a text editor/IDE via FTP should take less than a second, even over the internet, and refreshing a remote browser session will give your UI designers a better feel for the real user experience and flow. If you use SVN rather than FTP, much the same applies.
Security isn't much of a concern: lock down FTP and SSH to the office IP, but have a backdoor available so that a developer who needs to edit a source file from somewhere else can temporarily open the firewall to their own IP.
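For what it's worth, the lock-down itself is only a couple of firewall rules. A minimal iptables sketch, where the office range 203.0.113.0/24 and the developer's address 198.51.100.7 are placeholders:
# Allow FTP (21) and SSH (22) only from the office range, drop those ports for everyone else
iptables -A INPUT -p tcp -m multiport --dports 21,22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 21,22 -j DROP
# "Backdoor" case: temporarily insert an ACCEPT rule for a developer's current IP, remove it again afterwards
iptables -I INPUT -p tcp -m multiport --dports 21,22 -s 198.51.100.7 -j ACCEPT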
I have developed PHP and Rails apps on a remote test server, on an in-office server and on a local machine. After many years doing each, I can say that as a developer I don't mind any of them that much.
As a developer, my suggestion is to do all development work locally first. After testing, push it out to the client to make it live. I work as a Ruby on Rails web developer at andolasoft.com, and we follow the same procedure. Hope this gives you the idea.
Thanks

What is Plan B for when heroku goes down on your production app?

This question is inspired by this recent outage:
https://status.heroku.com/incident/212
There doesn't seem to be much I can do here. I can't push at all, and pushing seemed to be what broke it in the first place. AFAIK, I can't switch over to a new server deployed on AWS or elsewhere without fiddling with the DNS records. What should I do?
When you use an "all-in-one" service like Heroku, you accept and understand that, in case of this kind of issue, you're in their hands and there's nothing you can do.
You can keep a backup system configured elsewhere but, from my point of view, this is a waste of time and resources because:
it requires you to configure and clone all Heroku settings and features
it's double work
in case of issues, the only way to redirect traffic to your app is to change DNS settings, and that change takes time to propagate
if you can clone Heroku features, you might not want to use Heroku at all
It's a good idea to have an off-site backup of your application, database and features. But on the other hand, these issues are the trade-off of using this kind of service.
The only real thing you could do would be to not rely on a single service provider for your application. This means that you would need to break out the DNS from the hosting platform so that you can re-point to a different platform (such as AWS).
Depending on your hosting platform there are different options, but in a nutshell the key is to reduce single points of failure and have plans in place to switch over when things do fail.
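A small practical corollary: keep the DNS zone with a provider that is independent of the hosting platform, and keep the TTL low so a re-point actually takes effect quickly. You can check what you are currently advertising with dig (www.example.com stands in for your own domain):
# The second field of each answer line is the remaining TTL in seconds;
# a value of around 300 or less keeps a failover re-point reasonably fast
dig +noall +answer www.example.com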

Best way to run rails with long delays

I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
I basically see two options: either run JRuby and use multithreading, or run several regular Ruby instances and hope that not many people try to use the service at a time. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or to have out-of-the-box support at Heroku and EngineYard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeouts, or you cannot control the deployment environment, you may want to process it in the background (a rough client-side sketch follows the list):
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
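Seen from the client, that flow could look roughly like the following; the /reports endpoints, the ticket format and the two-second polling interval are all made up for illustration:
# 1. Submit the request; the service answers immediately with a ticket id (assume the response body is just the id)
ticket=$(curl -s -X POST https://api.example.com/reports -d 'device_id=42')
# 2. Poll until the background worker has talked to the hardware and stored the result
until curl -s "https://api.example.com/reports/$ticket" | grep -q '"status":"done"'; do
  sleep 2   # the hardware round-trip can take up to ~10 seconds, so a short poll interval is fine
done
# 3. Fetch the finished result
curl -s "https://api.example.com/reports/$ticket/result"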
As far as hosting in JRuby, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you run config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
You can also run the web service call in a delayed background job so that it isn't hogging a web server, and it can even be run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX, you can then ping the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.

Are you using AWSDBProxy? Is there a performance hit when scaling out?

It seems that the only tutorials out there talking about using Amazon's SimpleDB in a Rails site are using AWSDBProxy... Personally, I find this counter-intuitive when it comes to scaling out, considering the server layout of a typical Rails site below (using AWSDBProxy):
Plugin here: http://agilewebdevelopment.com/plugins/aws_sdb_proxy
Image here: http://www.freeimagehosting.net/uploads/91be4e0617.png
As you can see, even if we add more mongrels, we have two problems.
We have a single point of failure far less stable than our load balancer
We have to force all our information through this one WEBrick server
The solution is, of course, to add more AWSDBProxies... but why not then just use the following code in, say, a class, skipping the proxy altogether?
service = AwsSdb::Service.new(Logger.new(nil),               # discard log output
                              CONFIG['aws_access_key_id'],
                              CONFIG['aws_secret_access_key'])
service.query(domain, query)                                 # talk to SimpleDB directly, no proxy in between
So what I'm getting at is: if you are using AWSDBProxy, what are your justifications for it? And if you are indeed using it, what is your performance like? If you have hard numbers, this would be even more appreciated!
I'm not using it, nor have I ever heard of it, but here is what I would think are reasonable reasons:
You're running your main app server on EC2, so the chance of Internet FAIL doesn't really affect you more than once.
You run one proxy on each of your app servers, so its connection going down is no worse than its connection(s) to the database going down.
Because it can be done. This is as good a reason as any in an open source project. Sometimes it takes building a thing before you know whether said thing is a good/bad idea.
You don't have the traffic levels to need a load balancer. Then your diagram squashes down to a line, if not a single machine.
