I'm scribbling down here my recent (and ongoing) experience with performance tuning a Java web application. Like any other article on optimization, I'll start with the quote: "Premature optimization is the root of all evil." My take is: don't *optimize* prematurely, but don't make that an excuse for *not measuring* the performance of your system. It is good to know how your system performs at all times.
Anyway, here is the common approach (reinforced after attending a talk on performance tuning at devcamp-2010) that everyone knows about, but I'm putting it here anyway...
1. Pick conservative hardware and install your application on it.
2. Create a large (but realistic) sample dataset. It might be good to just take a dump from production and load your system with it.
3. Put your system under extreme load. In general the load should be broad, exercising all the functions of your system. But in the special case where you're exploring just one feature's performance/GC activity, put the load on that feature only. Use any of the load generation tools... Apache Bench (non-GUI) and Apache JMeter (GUI as well as non-GUI modes) are two free ones; a minimal home-grown load generator is sketched right after this list.
4. Measure, Measure and Measure.
This gets tricky. I've tried using hprof for memory dumps and the Runtime.freeMemory/Runtime.maxMemory methods for memory consumption analysis (see the heap snapshot snippet after this list), but I ended up using a profiler (a commercial one that is standard in our org) and it really helps.
5. Make your change, if any, then measure again, and keep repeating.
6. Automate, as much as possible, the whole process for faster feedback loops.
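For quick-and-dirty load generation (step 3), even a tiny home-grown hammer can do, though I'd still use ab/JMeter for real runs. A minimal sketch only; the URL, thread count and request count below are made-up example values, not from our actual setup:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Hammers a single URL from many threads and reports rough throughput.
    public class MiniLoadGen {
        public static void main(String[] args) throws Exception {
            final String url = "http://localhost:8080/myapp/somePage"; // hypothetical target
            final int threads = 20;
            final int requestsPerThread = 500;
            final AtomicLong totalMillis = new AtomicLong();

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.currentTimeMillis();
            for (int t = 0; t < threads; t++) {
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < requestsPerThread; i++) {
                            try {
                                long began = System.currentTimeMillis();
                                HttpURLConnection conn =
                                    (HttpURLConnection) new URL(url).openConnection();
                                conn.getResponseCode(); // fire the request, ignore the body
                                conn.disconnect();
                                totalMillis.addAndGet(System.currentTimeMillis() - began);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            long requests = (long) threads * requestsPerThread;
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(requests + " requests in " + elapsed + " ms, avg latency "
                + (totalMillis.get() / requests) + " ms");
        }
    }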
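And the poor man's memory measurement from step 4 looks roughly like this with the Runtime API. The numbers are only indicative, since the GC may or may not have run recently; a profiler gives far better data:

    // Rough heap snapshot using the Runtime API.
    public class HeapSnapshot {
        public static void print(String label) {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.println(label + ": used=" + (used / 1024 / 1024) + "MB"
                + ", max=" + (rt.maxMemory() / 1024 / 1024) + "MB");
        }

        public static void main(String[] args) {
            print("before");
            // ... run the operation you want to measure here ...
            print("after");
        }
    }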
So far we've found the following major boosters...
1. Adding indexes. We were always very careful while adding indexes, as they become an overhead for write/update-heavy tables, but we later found there are many tables where indexes help a lot more than we imagined.
2. We noticed that a lot of garbage was generated, and most of it was char[] data resulting from a lot of Strings. We finally figured out that the culprit was the heavy amount of logs *printed*. Note that log statements in the code are not that expensive, but printing them is. So, for the prod/perf environment, set the print log level to ERROR and be very careful while putting log statements in the app (I mean being judicious about what level you log at... DEBUG/INFO/WARN/ERROR/FATAL); anything in log.error should really be an error.
3. Guarding log.debug statements with log.isDebugEnabled (see the sketch after this list).
4. Caching with LRU eviction (the policy may very well depend on the particular use case); a minimal sketch follows below.
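To illustrate point 3: the string concatenation inside a log.debug call happens even when DEBUG is disabled, so the guard skips building those Strings (and their char[]s) entirely. A small sketch assuming log4j; the class name and message here are made up:

    import org.apache.log4j.Logger;

    public class DebugGuardExample {
        private static final Logger log = Logger.getLogger(DebugGuardExample.class);

        public void handle(String requestId, int itemCount) {
            // Without the guard, the concatenation below builds the String
            // (and its underlying char[]) even when DEBUG is turned off.
            if (log.isDebugEnabled()) {
                log.debug("Handling request " + requestId + " with " + itemCount + " items");
            }
            // ... actual work ...
        }
    }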
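And for point 4, the JDK's LinkedHashMap in access-order mode already gives a serviceable LRU cache. A sketch only; the capacity is whatever you pass in, and this version is not thread-safe by itself:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A simple bounded LRU cache: LinkedHashMap in access order evicts the
    // least-recently-used entry once the capacity is exceeded.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public LruCache(int capacity) {
            super(16, 0.75f, true); // true = access order instead of insertion order
            this.capacity = capacity;
        }

        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }

Wrap it with Collections.synchronizedMap(...) if multiple threads share it.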
[Update:] We realized that our background jobs are very memory/CPU hungry, so we created a separate job node to run just the background jobs. It has improved the performance of serving synchronous client requests tremendously. However, we are still working on making the background jobs more efficient in terms of memory/CPU usage.
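One simple way to carve out such a job node is to gate the job scheduler on a role flag passed at startup. A sketch only; node.role is a hypothetical property name, not our actual config:

    // Started with -Dnode.role=jobs on the job node, -Dnode.role=web elsewhere.
    // node.role is a made-up system property, just for illustration.
    public class JobNodeGate {
        public static boolean isJobNode() {
            return "jobs".equals(System.getProperty("node.role", "web"));
        }

        public static void main(String[] args) {
            if (isJobNode()) {
                System.out.println("Starting background job scheduler...");
                // startScheduler();
            } else {
                System.out.println("Web node: background jobs disabled here.");
            }
        }
    }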
[Update:10-12-10] We realized that we usually used bind variables in SQL/HQL queries when we wrote them by hand, but when they were generated dynamically we forgot (it's easy) to use bind variables. Also, we never used bind variables wherever we had IN clauses. Since we all know it is more efficient to use bind variables to reduce parsing time in the db (they get more cache hits on already-parsed SQL), we are paying this technical debt and correcting all of it now. While we're at it, it seems this would have been a simpler exercise if all the HQL and SQL statements had been written in one place; then it would have been easy to go through all of them, but right now they are scattered across different places in the big project code-base.
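To illustrate the fix with plain JDBC (the orders table and its columns are made up; the same idea works with named parameters in HQL). The IN clause is the fiddly case because you have to generate one placeholder per value:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.List;

    public class BindVariableExample {

        // Bad: concatenating values means the db parses a new statement each time.
        // String sql = "SELECT * FROM orders WHERE status = '" + status + "'";

        // Good: one parsed statement, cached and reused for every status value.
        public ResultSet findByStatus(Connection conn, String status) throws Exception {
            PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM orders WHERE status = ?");
            ps.setString(1, status);
            return ps.executeQuery(); // caller closes statement/connection; omitted for brevity
        }

        // IN clause: build "?, ?, ?" with one placeholder per value, then bind each.
        public ResultSet findByIds(Connection conn, List<Long> ids) throws Exception {
            StringBuilder placeholders = new StringBuilder();
            for (int i = 0; i < ids.size(); i++) {
                if (i > 0) placeholders.append(", ");
                placeholders.append("?");
            }
            PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM orders WHERE id IN (" + placeholders + ")");
            for (int i = 0; i < ids.size(); i++) {
                ps.setLong(i + 1, ids.get(i));
            }
            return ps.executeQuery();
        }
    }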
[Update:14-12-10] We are resorting to bulk manipulation queries and they are giving significant improvements. To take an extreme example, it is much more efficient to directly execute the query "UPDATE t SET c = 'blah'" than to iterate over every data object, updating it and saving it (unfortunately, ORMs give us the habit of always thinking in terms of objects, and we forget to use these simple optimizations).
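With Hibernate the contrast looks roughly like this; Order here is a hypothetical mapped entity with a status property, just for illustration:

    import org.hibernate.Session;
    import java.util.List;

    public class BulkUpdateExample {

        // Slow: loads every row as an object, then updates them one by one.
        public void cancelAllSlow(Session session) {
            List<Order> orders = session.createQuery("from Order").list();
            for (Order o : orders) {
                o.setStatus("CANCELLED");
                session.update(o);
            }
        }

        // Fast: one bulk HQL statement, executed entirely in the database.
        public void cancelAllFast(Session session) {
            session.createQuery("update Order set status = :status")
                   .setString("status", "CANCELLED")
                   .executeUpdate();
        }
    }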
Also, I find the contents of this stackoverflow thread very informative, and it seems I've also made some of the mistakes mentioned there :).
[Update:10-03-11] Never leave the memory requirements of an operation unbounded. If your memory requirement grows in proportion to some dataset's size and you believe that dataset's size has an upper bound, then put some mechanism in place so that you get notified if the dataset grows beyond your assumption. In my experience, my assumptions were usually either wrong to begin with, or the load grew with time and they became wrong. So it is much better to plan your code so that the dataset size is bounded by design rather than by belief :). For example, if you're fetching a list of records from the database, there will probably be good enough reasons to use pagination parameters.
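A sketch of the pagination idea with Hibernate's Query API (setFirstResult/setMaxResults translate to LIMIT/OFFSET-style SQL in most dialects; AuditRecord is a made-up entity name):

    import org.hibernate.Session;
    import java.util.List;

    public class PagedFetchExample {

        // Unbounded: memory grows with the table. Fine until it isn't.
        // List records = session.createQuery("from AuditRecord").list();

        // Bounded by design: never pull more than pageSize rows into memory at once.
        public List fetchPage(Session session, int pageNumber, int pageSize) {
            return session.createQuery("from AuditRecord order by id")
                          .setFirstResult(pageNumber * pageSize)
                          .setMaxResults(pageSize)
                          .list();
        }
    }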
Wednesday, September 8, 2010