The answer is simple… only when the profiler tells you to.

Optimizations often make code less reliable, and they often constrain implementations, making them less flexible. Good software performance is not created by scattering micro-optimizations throughout your code; it comes from a good design at the architectural level. That is the critical key to performance.

Why would I say this? Why are optimizations bad? Allow me to demonstrate a few types of optimizations that should only be done after careful consideration and profiling:

  1. Lazy Loading: I hate the term, because it’s not the lazy approach. Go with the truly lazy approach and don’t do this until it’s actually a problem. Writing a property with strange side-effects on an object is a bad idea to begin with. If the performance of loading a collection of data is so bad that you must avoid it, then use a method that does not cache the result, and make it clear to callers that the data is not cached but loaded on each call. For instance, if you want a collection of Students from a Teacher object, call the method “FetchStudents()” rather than “GetStudents()” (see the sketch after this list).
  2. Caching: WTF, so many people think this is some kind of great idea. Caching is evil: it overcomplicates code and causes strange side-effects all over your code base. Just to be clear, I’m not talking about caching computational results in a data store; I’m talking about caching the results of data-store requests. Don’t ever do it… ever. The one noteworthy exception to the rule is read-only or write-once data.
  3. Micro Optimizations: This is hard to define, so let’s review a few examples from Stack Overflow. See “Array more efficient than dictionary”, “Are doubles faster than floats”, or “Is if-else faster than switch”. These kinds of optimizations are just silly. Write readable, maintainable, testable code first, then analyse the code with a profiler if you have a problem. I loved Jon Skeet’s response to the first of these questions, entitled “Just how spiky is your traffic”.
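To make the naming point in item 1 concrete, here is a minimal C# sketch. The Teacher, Student, and IStudentRepository types are hypothetical stand-ins, not code from any real project; the only point is that a method named FetchStudents() tells callers that every call goes back to the data store, with no hidden caching side-effects.

```csharp
using System.Collections.Generic;

// Hypothetical domain model, for illustration only.
public class Student
{
    public string Name { get; set; }
}

// Assumed data-access abstraction; substitute whatever your project uses.
public interface IStudentRepository
{
    IReadOnlyList<Student> LoadStudentsForTeacher(int teacherId);
}

public class Teacher
{
    private readonly IStudentRepository _repository;

    public int Id { get; }

    public Teacher(int id, IStudentRepository repository)
    {
        Id = id;
        _repository = repository;
    }

    // "Fetch" signals that every call goes to the data store;
    // nothing is cached behind the caller's back.
    public IReadOnlyList<Student> FetchStudents()
    {
        return _repository.LoadStudentsForTeacher(Id);
    }
}
```

If a caller wants the data held in memory, it can keep the returned list in its own variable, and the cost of that decision is visible right where it is made instead of being buried in a property.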


Enough about what not to do… what should you do? Always be aware of the cost of the algorithm you are choosing. Choosing the right data structure and/or algorithm at the outset of coding can result in highly predictable, scalable software. Conversely, failing to do so can cause non-linear performance as the volume of data increases, eventually yielding unusable software. We’ve all seen this happen in a database application with a non-indexed query. It can happen in your own C# code too, so don’t be an idiot, but don’t spend all your time chasing the fastest possible approach either. Look instead for a predictable, linear-growth solution that fits your needs.
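As a small illustration, here is a hedged C# sketch (the million-record data set and the student-ID scenario are invented for the example): checking membership with List&lt;T&gt;.Contains is a linear scan on every call, while HashSet&lt;T&gt;.Contains is roughly constant time on average, so the second approach keeps its performance predictable as the data grows.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class MembershipExample
{
    public static void Main()
    {
        // Invented data set: one million enrolled student IDs.
        List<int> enrolledList = Enumerable.Range(0, 1_000_000).ToList();

        // O(n) per lookup: Contains scans the list, so total work grows
        // with both the number of lookups and the volume of data.
        bool foundInList = enrolledList.Contains(999_999);

        // O(1) per lookup on average: building the set is a one-time linear
        // cost, after which lookups stay flat as the data grows.
        var enrolledSet = new HashSet<int>(enrolledList);
        bool foundInSet = enrolledSet.Contains(999_999);

        Console.WriteLine($"{foundInList} {foundInSet}");
    }
}
```

Neither choice is a micro-optimization; it is simply picking the data structure whose cost grows predictably with the work you actually do.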

Remember there are three competing interests in any piece of code: Flexibility, Reliability, and Performance. Any two of these can be achieved at a reasonable level at the sacrifice of the third, or you can excel at any one at the cost of the other two. The choice is yours to make and will change from task to task and project to project. I call this the ‘Software Quality Triangle’ because of its similarity to the age-old project manager’s triangle (Cost, Time, or Features). By graphically representing each concept as a point of an equilateral triangle and placing the axis of rotation at its center, you can gain perspective on what you will sacrifice for what you will gain. For instance, by rotating the triangle so that Performance points straight up (its highest level), the other two points (Flexibility and Reliability) drop.
