My last post regarding “Deadlocks and other issues with Invoke / BeginInvoke” was something of a rant on the subject of WinForms and multi-threading. I thought I’d post on the usage of the solution that was originally presented in the Stack Overflow article. The following defines the interface portion of the class:

    public class EventHandlerForControl<TEventArgs> where TEventArgs : EventArgs
    {
        // Wraps a handler so that every call is marshaled to the thread that owns the control.
        public EventHandlerForControl(Control control, EventHandler<TEventArgs> handler);
        public EventHandlerForControl(Control control, Delegate handler);

        // Subscribe this method to the event in place of the original handler.
        public void EventHandler(object sender, TEventArgs args);
    }

Using this class is incredibly easy. Let’s assume our form is creating a ‘worker’ object and wants to know, via an event, when some value has changed. We simply need to modify the subscription to the event.

The wrong way:

    class MyForm : Form
    {
        public MyForm()
        {
            _worker = new Worker();
            _worker.OnUpdate += this.Worker_OnUpdate;
        }
        private void Worker_OnUpdate(object sender, EventArgs e)
        {
            // Manually marshal to the UI thread; this is the Invoke pattern the previous post warned about.
            if (this.InvokeRequired)
                this.Invoke(new EventHandler(Worker_OnUpdate), sender, e);
            else
                ;//whatever
        }
    }

Now we just do the following with the help of EventHandlerForControl:

The right way:

    class MyForm : Form
    {
        public MyForm()
        {
            _worker = new Worker();
            // The wrapper's EventHandler marshals the call onto this form's thread.
            _worker.OnUpdate += new EventHandlerForControl<EventArgs>(this, this.Worker_OnUpdate).EventHandler;
        }
        private void Worker_OnUpdate(object sender, EventArgs e)
        {
            ;//whatever
        }
    }

Now, whenever the OnUpdate event is fired, it is automatically marshaled to the correct thread; no special handling is needed. You can also be certain that either the call is successfully made or an exception is thrown to the caller.

EventHandlerForActiveControl<T> – Use this class if you do not want an exception thrown when the target control is not available. The method will not be called when the control is not active.

ForcedEventHandlerForControl<T> – Use this class if you do not want an exception thrown when the target control is not available but you still want the method to execute.

All of these are defined here if you just want the source code.
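
If you are curious what the core class does under the hood, here is a minimal sketch of the idea (the field names and exact checks are my own guesses; grab the source above for the real implementation):

    using System;
    using System.Windows.Forms;

    public class EventHandlerForControl<TEventArgs> where TEventArgs : EventArgs
    {
        private readonly Control _control;
        private readonly EventHandler<TEventArgs> _handler;

        public EventHandlerForControl(Control control, EventHandler<TEventArgs> handler)
        {
            if (control == null) throw new ArgumentNullException("control");
            if (handler == null) throw new ArgumentNullException("handler");
            _control = control;
            _handler = handler;
        }

        public void EventHandler(object sender, TEventArgs args)
        {
            // If the control can no longer marshal the call, fail loudly rather than silently dropping it.
            if (_control.IsDisposed || !_control.IsHandleCreated)
                throw new ObjectDisposedException(_control.GetType().Name);

            if (_control.InvokeRequired)
                _control.Invoke(_handler, sender, args); // blocks until the control's thread has run the handler
            else
                _handler(sender, args);
        }
    }

The key point is that the Invoke happens inside the wrapper, so the subscribing form never writes the InvokeRequired dance itself.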

 

Just ran into this post over on devlicio.us and thought it worth a shout-out. Much like the CommandInterpreter I recently added to the library, it allows you to remove all of the argument parsing from your code and provides command-line help. Here’s hoping that a full-blown version eventually finds its way into the core runtime. Until then, I’ve been very pleased so far with the version I built.
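
For reference, using the CommandInterpreter looks roughly like the sketch below. I’m writing this from memory, so treat the constructor and Run signatures as assumptions and check the library source for the real API:

    using System;
    using CSharpTest.Net.Commands;

    class Commands
    {
        // Public methods become commands; arguments are parsed (and help text generated) for you.
        public void Touch(string fileName)
        {
            System.IO.File.SetLastWriteTime(fileName, DateTime.Now);
        }
    }

    static class Program
    {
        static int Main(string[] args)
        {
            CommandInterpreter ci = new CommandInterpreter(new Commands());
            ci.Run(args); // dispatches to Touch, or prints usage/help on request
            return Environment.ExitCode;
        }
    }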

 

The answer is simple… only when the profiler tells you to.

Optimizations often make code less reliable, and they often constrain implementations, making them less flexible. Good software performance is not created by making lots of micro-optimizations throughout your code. A good design, from the architectural point of view, is the critical factor in achieving good performance.

Why would I say this? Why are optimizations bad? Allow me to demonstrate a few types of optimizations that should only be done after careful consideration and profiling:

  1. Lazy Loading: I hate the term as it’s not a lazy approach. Go with the truly lazy approach and don’t do this until it’s a problem. Writing a property to have strange side-effects on an object is just a bad idea to begin with. If the performance of loading a collection of data is so bad that you must avoid it, then use a method that does not cache the result. Make it clear to callers that the data is not cached but loaded with each call. For instance, if you want a collection of Students from a Teacher object, call the method “FetchStudents()” rather than “GetStudents()” (see the sketch after this list).
  2. Caching: WTF, so many people think this is some kind of great idea. Caching is evil: it overcomplicates code and causes strange side-effects all over your code. Just to be clear, I’m not talking about caching computational results in a data store; I’m talking about caching the results of data-store requests. Don’t ever do it… ever. A noteworthy exception to the rule is dealing with read-only or write-once data.
  3. Micro Optimizations: This is hard to define, so let’s review a few examples on Stack Overflow. See “Array more efficient than dictionary”, “Are doubles faster than floats”, or “Is if-else faster than switch”. These kinds of optimizations are just silly. Write readable, maintainable, testable code first, then analyze the code with a profiler if you have a problem. I loved Jon Skeet’s response to the first of these questions, entitled “Just how spiky is your traffic”.
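
To make the naming point from item 1 concrete, here is a minimal sketch (the Teacher/Student types and the data-store call are hypothetical):

    using System.Collections.Generic;

    public class Student
    {
        public string Name;
    }

    public class Teacher
    {
        // The 'Fetch' prefix tells callers this hits the data store on every call;
        // nothing is quietly cached behind a property getter.
        public IList<Student> FetchStudents()
        {
            List<Student> results = new List<Student>();
            // ... query the data store for this teacher's students ...
            return results;
        }
    }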

 

Enough about what not to do… what should you do? Always be aware of the cost of the algorithm you are choosing. Choosing the right data structure and/or algorithm at the onset of coding can result in highly predictable and scalable software. Conversely, failing to do so can cause non-linear performance as the volume of data increases, eventually yielding unusable software. We’ve all seen this happen in a database application with a non-indexed query. This can happen in your own C# code too, so don’t be an idiot, but don’t spend all your time focused on the fastest approach. Look instead for a predictable, linear-growth solution that fits your needs.
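
As a trivial illustration of picking a structure for predictable growth (a hypothetical membership check), a linear scan costs O(n) per lookup and degrades badly when repeated over growing data, while a hash-based lookup stays near O(1):

    using System.Collections.Generic;

    static class Lookups
    {
        // O(n) per call - fine for a handful of items, painful when the list grows large.
        public static bool IsKnownUser(List<string> users, string name)
        {
            return users.Contains(name);
        }

        // O(1) on average per call - performance stays flat as the volume of data increases.
        public static bool IsKnownUser(HashSet<string> users, string name)
        {
            return users.Contains(name);
        }
    }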

Remember there are three competing interests in any piece of code: Flexibility, Reliability, and Performance. Any two of these can be achieved at a reasonable level at the sacrifice of the third, or you can excel at any one at the cost of the other two. The choice is yours to make and will change from task to task and project to project. I call this the ‘Software Quality Triangle’ because of its similarity to the age-old project manager’s triangle (Cost, Time, or Features). By graphically representing each concept as a point of an equilateral triangle and positioning the axis of rotation at its center, you can gain perspective on what you will sacrifice versus what you will gain. For instance, by rotating the triangle so that Performance points straight up (its highest level), the other two points (Flexibility and Reliability) will drop.