I am trying to create an app in Heroku. I have done various things to make it smaller (it is now down to one third of its original size) and still its "compiled slug size" comes to 760M. So it keeps failing to push, since it is larger than 500M. I am trying to figure out if there is a service I can pay for that would give me the space and let me push an app larger than 500M. I cannot find a service that gives me such an option.
I really need to solve this problem, because this is a very basic version of my app and I hope to expand it further into a commercial app.
Your slug is very large, and you should try to reduce it further (especially if this is supposed to be a simple version of your application).
You can look at which dependencies you import and leave out what you don't really need: how to do this depends on your stack (for Java Maven apps you trim the POM file, for Python the requirements.txt, etc.).
Also check out .slugignore, in case you can leave files and assets out of your project folder.
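For example, a minimal .slugignore in the project root might look like the following (the entries are only placeholders for assets your app does not need at runtime):
*.psd
*.mov
/docs
/spec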
A good option is to dockerize the application: Docker-based deploys are not subject to the same slug size restrictions.
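As a rough sketch, assuming you already maintain a Dockerfile in the project root, switching the app to Heroku's container stack comes down to adding a heroku.yml:
build:
  docker:
    web: Dockerfile
and then changing the stack and pushing again (the branch name depends on your repository):
heroku stack:set container
git push heroku main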
If none of the above is successful you might need to look at a different provider (DigitalOcean has no slug restrictions, but it does not have a free plan) or a change in your architecture (splitting the application into different components).
Can anyone suggest the steps to apply a patch (patch upgrade) on an IBM Informix database? Please suggest the best practices available. If possible, share a URL or any documents.
It's a big topic. A good deal depends on how the server is currently set up — there are setups that make it hard and others that make it easier. Another major factor is your level of risk aversion. You need to make an assessment of the amount of down-time you can afford. Also, consider how often you have practiced recovery from backups — it probably won't be necessary, but you need to cover your bases.
I am assuming you're using Informix Dynamic Server, not Informix Standard Engine (SE). Upgrading SE is much, much simpler.
Preparation
Before you install, make sure you have a good, recent, level 0 archive of your system.
Also, make sure you know where your software is installed, and which disks and files it uses.
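With ontape, for example, a level 0 archive is taken like this (onbar users would use the equivalent onbar command instead):
ontape -s -L 0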
Route 1: Simple, but potentially risky
Make sure you have a backup copy of $INFORMIXDIR.
Take down the servers that are currently running using this $INFORMIXDIR.
Install the new version of the software over the existing software.
Restart the server.
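As a concrete sketch of those steps (the paths are examples; onmode and oninit assume a standard IDS installation):
cp -rp /opt/informix /opt/informix.backup   # backup copy of $INFORMIXDIR
onmode -ky                                  # take the server down
# ...install the new version over /opt/informix...
oninit -v                                   # bring the server back up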
Why is this risky? At issue is what happens if anything goes wrong, and also the length of time the server(s) is (are) down. If you bring up the server and decide something is wrong and you wish to go back to the old version, you have to take servers down, reinstall the old software (copy off the backup?), and then bring the (old version of the) server back up. This takes time. This isn't often a problem, but it has happened on occasion over the last couple of decades.
Route 2: Parallel install
This is the way I do it, but I ensure that my system is set up so that this is easy to do. In particular, the file names used to identify the chunks used by the server are symlinks to the actual storage. This makes it easier to move or replace storage when necessary — you change the symlink instead of having to modify the server configuration.
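For example (the device name is purely hypothetical), a chunk might be set up as a symlink kept outside $INFORMIXDIR:
ln -s /dev/rdsk/c1t0d0s4 /opt/ifmxdata/rootdbs_chunk1
The server configuration references /opt/ifmxdata/rootdbs_chunk1, so moving the underlying storage later only means recreating the link.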
Create a new directory (e.g. /opt/informix.new) and install the new version of the software in it.
Copy the configuration files from the current $INFORMIXDIR (e.g. /opt/informix) into the new one.
Ensure any other files or directories under the old $INFORMIXDIR that are needed for the new one are copied across or recreated empty.
Review the parallel setup; as best you can, make sure that when you're ready to switch, everything will work.
Take the old server down.
Move the old $INFORMIXDIR to a new name (i.e. mv /opt/informix /opt/informix.old).
Move the new $INFORMIXDIR to the working name (i.e. mv /opt/informix.new /opt/informix).
Restart the server.
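Concretely, the switch-over can be as short as this (paths as in the steps above; onmode and oninit assume a standard IDS installation):
onmode -ky                            # take the old server down
mv /opt/informix /opt/informix.old    # keep the old software on disk
mv /opt/informix.new /opt/informix    # new software takes the working name
oninit -v                             # restart on the new version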
Why is this less risky? The primary advantage is that the old software is still on the machine and switching back to the old version is therefore simply a question of undoing the original pair of move commands. Another major advantage is that the down-time for the system is limited to the time taken to stop, switch directories, and restart the system.
What are the potential downsides? If you weren't careful enough about copying the necessary files from the old to the new system, you can find yourself missing something critical.
Note that if your chunks are not symlinks, and especially if they are cooked files stored under the old $INFORMIXDIR, you can run into problems. These are not insuperable; you just have more work to do than simply moving directories. Do not (repeat not) move or copy chunks while the server is running. They will not (necessarily) be consistent.
Variations? I usually needed multiple versions of Informix around, so I'd use sets of directories like /work3/informix/ids-12.10.FC1 and /work3/informix/ids-11.70.FC4 as the real directories. I'd then use a standard symlink name as $INFORMIXDIR, such as /opt/informix, which would link to the current version-specific INFORMIXDIR under /work3/informix in this example. (Actually, there were some extra levels of complexity in my setups, but my requirements as an Informix developer were different from those of most customers.) But the key point is that instead of moving directories, I switched a symlink — rm /opt/informix; ln -s /work4/informix/ids-12.10.FC3 /opt/informix to use 12.10.FC3 instead of 12.10.FC1, for example.
Post-installation
Run a new level 0 archive.
General observations
Informix upgrades are usually seamless and smooth. If there is conversion work to do on the upgrade, the server does it automatically when the new version is brought up.
Be aware of the mechanisms for reverting to an older version of the server if that is found to be necessary.
I've done presentations and/or papers on this in years past at IIUG conferences. Check out the IIUG web site, and the IBM Informix documentation.
We outsourced the development of BlackBerry 5, 6, and 7 apps. Please bear in mind that I have absolutely no knowledge of BlackBerry development at all.
Development is complete, and they have sent us the source code - a collection of .cod, .csl, .cso, .debug, .jad and .rapc files.
I would at least like to review the code in terms of its consistency and standards, as a rough measure of quality. Clean code is not something specific to any one platform.
I have tried opening each of these files in Notepad, but found no source code.
Please advise me on what I need before I go pay them a visit.
The files you have been given are the files that are created as part of the build of your project and the resultant executable files. There is no source included here.
In a BB OS build, regardless of the development environment used, the Java source files will all have the suffix .java, and the assets (images etc.) will have a suffix appropriate to the asset (like .png). If you don't see these file suffixes, then you have not been given the actual source. You should be able to view the .java files using Notepad; the other files will open in an appropriate application (like Paint).
To get the complete source, you should ask for the full 'project' files for your development. This will be a directory with a number of subdirectories. The actual names used and the structure will depend on the development tool. If your developer is using Eclipse, then the two important directories are called src (for source) and res (for resources, i.e. the assets). If they are using another development environment, the directories might have different names, so you should also ask them what development environment they are using.
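As a rough illustration of what an Eclipse BlackBerry project folder might look like (only src and res are certain; the rest varies by tool and project):
MyApp/
    src/   (the .java source files)
    res/   (images and other assets, e.g. .png files)
    plus whatever project and build files the development environment creates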
Two other points:
1) If you are paying for this development and wish to review the code, but are not familiar with Java, then I would recommend that you pay someone to review the code who has knowledge of BB Java. There are two reasons for this:
(a) you will not be able to form a judgement on the appropriateness of the code without some understanding of Java, and
(b) you will not understand if the correct BB Java approaches have been used.
You need to be cautious about this, because programmers will always find fault in other developers' code. The question is how significant the faults are.
2) Some developers might be wary of giving source to their client while some payment is outstanding.
Ok, please bear with my noob question here.
I'm doing the simple task of making an update to my MVC application, compiling it and then moving it onto the production server.
I just want to know the best way to upload the compiled files. I have a single application pool, use FTP to upload the new application files, and the site points to a single directory.
If I update just one view then which files do I upload after compiling?
Is there a way to keep the site running while I upload new code/views?
Where can I go to find out this information?
Generally, you can update views without needing to recycle your web application. You would just want to replace the old version of the file with the new version, which can be done with a simple xcopy command.
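For example, if the production Views folder is reachable over a file share (the server and path names here are placeholders; the same idea applies if you copy over FTP instead):
xcopy Views\*.* \\PRODSERVER\wwwroot\MyApp\Views /S /Y /D
/S copies subdirectories, /Y suppresses overwrite prompts, and /D only copies files that are newer than what is already on the server.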
If there are code changes, then you will need to update the web project DLL, which requires the app to recycle. This may or may not be a huge disruption, but it does mean that users may have their session interrupted, and lose some state.
Now, the question of how you could go about doing this is a little more complex. You can write a deployment process into your build scripts, which may be the easiest approach. The trick here, though, is that if you want to only include files that have changed, this can be a little trickier using vanilla NAnt or MSBuild tasks. You may also want to look at the WebDeploy tool from the IIS team. I've not used it much myself, but it is designed specifically to deploy web projects.
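As a sketch of what a Web Deploy call can look like (the paths, site folder, and server name are placeholders; check the msdeploy documentation for the options that fit your setup):
msdeploy -verb:sync -source:contentPath="C:\builds\MySite" -dest:contentPath="D:\sites\MySite",computerName=PRODSERVER
The sync verb only transfers files that differ, which addresses the concern of deploying only what has changed.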
You may also want to hit Google for some commercial deployment tools if none of the options so far seem to work for you.
Is there a way to group a bunch of DLLs and still use them at run time (not zipped up)? Sorry, this question sounds terse and stupid, but I'm not sure what more to ask.
I'll explain the situation though:
We've had two standalone Windows applications, and now one of them has swelled to such ungainly proportions that the other cannot run outside of the scope of the first app. We want to maintain some of the encapsulation we had while letting the smaller program in on some of the bigger program's features.
There is no problem in running the application, other than we don't want to send out all the 20-30 DLLs that the smaller project has.
It is possible to do this by adding startup code which checks if the DLLs are present on the target system and, if not, extracts them from the resources section (or simply from data tagged onto the end of the exe). A good example of this being done is Process Explorer - it's distributed as a single binary, but when run it extracts and installs a driver.
If you have a situation where most, or all, of those assemblies have to be kept together, then I would highly recommend just merging the code files into the same project and recompiling. This would leave you with one assembly.
Of course there are other considerations like compile time, overall size of the final dll, how often various pieces change, and whether each component is deployed without the others.
One example of a company that did this is Telerik. Their dev components are all compiled into the same assembly. This makes deployment an absolute breeze. Contrast that with Dev Express, which puts just about every control into its own assembly. Because of this, just maintaining, much less deploying, a Dev Express project is not something for the faint of heart.
(I don't work for either of those companies. However, I have a lot of experience with both toolkits.)
You could store the DLLs as Resources, and use BTMemoryModule, which essentially allows you to LoadLibrary on a Stream.
That way you could compile the multiple DLLs straight into the EXE, or into a single resource DLL.
see http://www.jasontpenny.com/blog/2009/05/01/using-dlls-stored-as-resources-in-delphi-programs/
It has been a long while since I have really worked with J2EE, so please excuse my ignorance. My recent engagement in a Grails project has piqued my interest in Java once more, and especially in Grails, as it makes building web applications on the Java platform fun again. That being said, I would like an expert to comment on the following requirement that I have for an application built using Grails:
The Design Team (web designers) require access to the GSP pages so they can edit the view files (layouts, templates, etc.) without bothering the development team (software developers). This scenario can occur both during construction and after deployment into production.
The level of communication between the Designers, Developers, and Business Users is not an issue here. However, about 40% of the time, the Business Units involved request changes to the front-end that have no impact on the Developers' time but require the time of a Design Team member. Currently, the deployment workflow takes the Grails application through building and deploying a WAR file to a Tomcat server. I imagine there is a simpler way to allow the Design Team to make UI changes without going through the build and deploy lifecycle.
Several of the Design Team members have had exposure to PHP in the past and at times miss the ability to just overwrite a template file to make a UI piece more functional or improve a layout template. I hope there is a similar way to accommodate such simplicity within Grails. I have been told that exploding the WAR file might be an option, but that still requires reloading the Tomcat-hosted application.
If you believe that I am looking at the desired solution the wrong way, please do chime in, as I am more interested in a workable compromise for all the team members involved. Thank you.
You need to specify the following settings in Config.groovy:
grails.gsp.enable.reload=true
grails.gsp.view.dir="/path/to/gsp/views"
The 'grails.gsp.view.dir' is typically the path to your checked-out SVN repo. You can then just 'svn up' every time you want to update the views.
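For example, to scope this to production only (the path is just a placeholder for wherever the designers' working copy lives), Config.groovy could contain something like:
environments {
    production {
        grails.gsp.enable.reload = true
        grails.gsp.view.dir = "/var/www/gsp-views"
    }
}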
There is one caveat: when a GSP view is compiled it uses up PermGen space. Eventually you will run out and need to restart the server.
You could run a server with a version of the application via run-app in development mode. The designers can then make changes to the views and they will reload. They would need to be able to access the source code on the server via a share of some kind. As a plus, if you checked out the source, the designers could then commit their changes from the server.
The downside is that if the reloading fails or you run out of memory (which has been known to happen with lots of reloading), either a developer would need to stop and start the app, or you could provide the designers with a script to run to bounce it.
You'd obviously take a performance hit by running in development mode and via run-app, but it might be an OK trade-off in your case.
cheers
Lee
This may not be a direct answer to this question, but since you seem to pay attention to the designers' role in a project, you may also want to check my designer-friendly GSP implementation, which enables designers to view GSP pages even with custom tags, thanks to the "tag declaration via attributes" feature.