Attaching custom attributes to an enumeration

This post is a response to Ugly Case Statements, where Sunny tackles the problem of running separate code for each element of an enumeration. I’d like to suggest another alternative to handle this.

First, I’m not yet convinced that the dictionary-of-delegates pattern is a significant improvement over the switch-statement pattern. If the goal is to make the code that calls the action more readable, that can easily be done by extracting the switch statement into its own method. For example, suppose we need to run different setup code depending on the type of document (invoices, packing slips, purchase orders, and so on). The extracted method might look like:

void Setup(DocumentType t)

Calling the Setup(type) method is then simpler and easier to trace than the actionMap pattern. It could even be written as an extension method on DocumentType to read more cleanly. The complexity is pushed into this Setup method instead of the InitializeActionMap method.
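For illustration, a minimal sketch of that extension-method version might look like the following; the DocumentType values and the private setup helpers are placeholders rather than code from the original post:

public static class DocumentTypeExtensions
{
	//The switch lives here, so call sites just read type.Setup()
	public static void Setup(this DocumentType type)
	{
		switch (type)
		{
			case DocumentType.PurchaseOrder:
				SetupPurchaseOrder();
				break;
			case DocumentType.SalesOrder:
				SetupSalesOrder();
				break;
			//...one case per document type
		}
	}

	//Hypothetical helpers standing in for the real setup code
	private static void SetupPurchaseOrder() { /* ... */ }
	private static void SetupSalesOrder() { /* ... */ }
}

Call sites then shrink to type.Setup(), and this one method is the only place that grows when a new document type is added.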

Taking this one step further, one annoyance with both of these patterns is that the code enumerating the possible document types and the code defining their setup actions live in different places, so it’s easy to add a document type and forget to add its setup code. Here’s an alternative: we can declare the action right alongside the enum definition, like:

public enum DocumentType
{
	//DocumentSetup (shown later) is the class holding the static setup methods
	[Action(typeof(DocumentSetup), "SetupPurchaseOrder")]
	PurchaseOrder,
	[Action(typeof(DocumentSetup), "SetupSalesOrder")]
	SalesOrder,
}

Here we’re using an attribute called ActionAttribute to annotate each enumeration value. Attribute arguments have to be compile-time constants, so we can’t pass a delegate directly; instead, the attribute takes the type that defines the setup method plus the method’s name, and builds an Action delegate through reflection. SetupPurchaseOrder is a static method we define to handle the setup.

[AttributeUsage(AttributeTargets.Field)]
public class ActionAttribute : System.Attribute
{
	private readonly Type _type;
	private readonly string _methodName;

	public Action MyAction
	{
		//Bind an Action to the named static method on the supplied type
		get { return (Action)Delegate.CreateDelegate(typeof(Action), _type, _methodName); }
	}

	public ActionAttribute(Type type, string methodName)
	{
		_type = type;
		_methodName = methodName;
	}
}

Then you have some library code that looks up the attribute on the enum value and executes the action:

//FieldInfo lives in System.Reflection
public static void Setup(DocumentType t)
{
	//Get the ActionAttributes attached to the DocumentType
	FieldInfo f = typeof(DocumentType).GetField(t.ToString());
	var attrs = f.GetCustomAttributes(typeof(ActionAttribute), true) as ActionAttribute[];

	//Execute the action, if one was attached
	if (attrs != null && attrs.Length > 0)
	{
		attrs[0].MyAction();
	}
}
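To round out the example, here is a minimal sketch of the setup methods themselves. The DocumentSetup class name is an assumption introduced for this illustration (the original post only names the SetupPurchaseOrder method), and the method bodies are placeholders:

//Hypothetical holder for the static setup methods named by the attributes
public static class DocumentSetup
{
	public static void SetupPurchaseOrder()
	{
		//purchase-order-specific setup goes here
	}

	public static void SetupSalesOrder()
	{
		//sales-order-specific setup goes here
	}
}

//Usage, from anywhere in the application:
Setup(DocumentType.PurchaseOrder);  //runs SetupPurchaseOrder via the attribute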

This approach nicely ties the document types to the actions each of them requires. If you have multiple switch statements over the same enumeration, you can extend the pattern by associating a class with each enumeration value rather than a single action; a rough sketch of that variant follows below. If the action needs to be decoupled from the enumeration, say because the user interface needs to be set up, the SetupPurchaseOrder methods could handle that, for example by instantiating the interface through an Abstract Factory.
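As a sketch of the class-per-value variant, the attribute could carry a Type instead of a method name. The IDocumentHandler and HandlerAttribute names here are invented for illustration, not taken from the original post:

//Hypothetical interface implemented by one handler class per document type
public interface IDocumentHandler
{
	void Setup();
	void Render();
}

[AttributeUsage(AttributeTargets.Field)]
public class HandlerAttribute : System.Attribute
{
	public Type HandlerType { get; private set; }

	public HandlerAttribute(Type handlerType)
	{
		HandlerType = handlerType;
	}
}

//The library code would then instantiate the handler instead of invoking a single action, e.g.
//var handler = (IDocumentHandler)Activator.CreateInstance(attr.HandlerType);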



Lesson from Firefox to Chrome

I had been a Firefox fan for as long as I could remember. When it first came out, I thought its simplicity, compared to the other stuff *cough IE*, was brilliant. Firefox made browsing easy. But over time, they kept adding more and more stuff to it. After a while, Firefox became slow and sluggish.

When Chrome came out, its speed was amazing. So, like some of us, I converted. Sure, Firefox 6 is now just as fast and easy, and it has some awesome features too. But if those improvements are only marginal (and arguably they are), it will be really hard for me to change my habits back.

I think there are some lessons we can learn from this.

If your original intent is to make something easy and fast, then once you’ve accomplished that, you’ll likely ask, “what’s next?”

Instinctively, most of us will think “more features”: “Oh, if we add this, it’ll be cool.” “Hey, can we put this in? This guy really wants it.” “Look at that feature our competitor got. We should have it too.” … something along those lines.

But the more features you add, the more *stuff* comes along with them: more UI elements, more code paths to process, more data to parse, more of… everything. Over time, it can easily turn into the sluggish big monster you originally vowed to fight against. By the time you realize it, some of your beloved early supporters are already long gone.

So every time you want to make it “bigger and better”, first ask yourself, “will this also make it simpler and easier?”



Ugly case statements

So the other day I was going through some old C# code and encountered a giant wall of text in a function. What I mean is that I saw a giant switch-case statement that must have spanned a couple of pages. While the code worked, it was ugly, disgusting, and every time I tried to read it, my eyes bled a little bit.

I personally believe that your code should convey some sort of elegance, and that giant switch-case statement was anything but elegant. But what else can you do besides using a switch statement anyway?

Well, the first option is to use a hashtable with a bunch of function pointers, which in C# amounts to a dictionary of delegates. This greatly reduces the amount of code needed to do the actual work. Of course, you might end up with a long initialize function that fills the dictionary somewhere else in the code, but most of the time that code will still be smaller than your case statement. And chances are, you only need to read the initialize code again if you ever need to add another action to the map anyway.

Original Code:
switch (someEnum)
{
	case SomeEnum.SomeEnum1:
		//do something
		break;
	case SomeEnum.SomeEnum2:
		//do something
		break;
	//Goes on for a very, very long time
}

Option 1 (Delegates):

//DoSomething1 and DoSomething2 stand in for whatever each case did (static methods here).
//Collection initializers need .NET 3.5 or higher; alternatively, fill the map from a method called in your constructor.
private Dictionary<SomeEnum, Action> _actionMap = new Dictionary<SomeEnum, Action>
{
	{ SomeEnum.SomeEnum1, DoSomething1 },
	{ SomeEnum.SomeEnum2, DoSomething2 },
};

Now in the actual function, all you have to do is something like:

Action action = _actionMap[someEnum];
action();

Or if you need something more advanced, then perhaps another option is to use the command pattern.

Option 2 (Command Pattern):

ICommand command = SomeCommandFactory.CreateCommand(someEnum, args);
command.Execute();

You can simply use a dictionary of commands; I’m just trying to show that there are always better ways to do it. I’m all for using design patterns (provided they make sense for the problem you’re trying to solve) because they increase the readability and maintainability of the code. Oftentimes, they make your code more ‘elegant’.
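To make that concrete, here is a rough sketch of the dictionary-of-commands variant; ICommand, the concrete command classes, and SomeEnum are placeholder names rather than code from a real project:

public interface ICommand
{
	void Execute();
}

public class DoSomething1Command : ICommand
{
	public void Execute() { /* do something */ }
}

public class DoSomething2Command : ICommand
{
	public void Execute() { /* do something else */ }
}

//This map lives in whatever class owns the old switch statement
private static readonly Dictionary<SomeEnum, ICommand> _commandMap =
	new Dictionary<SomeEnum, ICommand>
	{
		{ SomeEnum.SomeEnum1, new DoSomething1Command() },
		{ SomeEnum.SomeEnum2, new DoSomething2Command() },
	};

//In the actual function:
//_commandMap[someEnum].Execute();

Adding a new enum value then means writing one small command class and one dictionary entry instead of growing a switch statement.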



Playing with Puppet

One of the tasks that I perform as a Rails application developer is repeatedly setting up the following Rails stack on an EC2 instance running a variant of Ubuntu:

  • Ruby Enterprise Edition (or Ruby 1.9.2 via RVM)
  • Apache (or Nginx) with Phusion Passenger
  • MySQL
  • Postfix

In the past few years, I have often found myself doing a similar setup for testing, staging, and production environments. One thing to note here is that I was using Amazon Web Services (AWS). This means that after an instance has been set up, I can take a snapshot of it and save the AMI to S3. Later on, I can use that image to launch new EC2 instances when needed, which avoids the hassle of rebuilding an image from scratch. But I want to have control over what gets installed on each instance. For example, if I wanted to launch an instance just for a database server, then it does not make sense to install Postfix or Nginx on it.

So, I set out to find tools that would let me set up a Rails stack in a production environment with as little fuss as possible. After doing some googling, I came across two tools: Chef and Puppet. Both are tools for automated infrastructure management. I am not going to discuss the differences between the two in this post. After reading this post (http://bitfieldconsulting.com/puppet-vs-chef), I decided to go with Puppet.

The first thing I needed was a VM on my development machine. This way, I can change the Puppet code and test it locally without ever touching the production or test environments. VirtualBox (open source virtualization software) is a great tool for setting up such a VM. I installed Ubuntu Maverick (10.10) on it.

Puppet is usually set up in a server-client architecture. You store your Puppet code on the server, and the clients download the code from it and run it locally. You can also run the Puppet code in serverless mode by invoking it manually from the terminal. Simply put, when the code is run, Puppet compares the system’s current state with what the Puppet code describes. If the system is not set up as described, Puppet makes the necessary changes on the client to sync it with what the code describes.

I created a bunch of Puppet modules (collections of resources, classes, files, definitions, and templates) for the different components of the Rails stack. See http://docs.puppetlabs.com/guides/modules.html for more information on Puppet modules. I created modules for setting up a deploy user, setting up MySQL, setting up Nginx, and so on. These modules basically tell Puppet which packages to install and which configuration files to set up. You can create default configuration files (e.g. nginx.conf, mysql.cnf) and have Puppet install them to the proper locations. With these modules in hand, I played around on my VM.

My initial impression of Puppet is that there is a steep learning curve. There is a lot to learn, but you are not alone: PuppetLabs (http://www.puppetlabs.com/) provides tons of resources to get you going. Once you start getting the hang of writing Puppet code, it’s a great tool that can help automate the setup of Rails (or any IT) environments.



10 reasons you should use RubyMine for Ruby on Rails Development

RubyMine is a great integrated development environment (IDE) for Ruby on Rails development.  Here are ten reasons you should consider using it:

  1. Built-in debugger.  Using a debugger instead of print statements will increase your productivity dramatically.
  2. Built-in support for ERB, HAML, HTML, CSS, SASS, and JavaScript.
  3. Integrates with Git, Subversion, etc.  You can add files or view diffs right from the IDE.
  4. Cross platform.  You can use RubyMine on Mac, Linux, and Windows; so if you need to switch computers you can still have the same familiar development environment.
  5. Refactoring.  Easily rename a variable or method.  Extract a chunk of code into a method.  Extract a convoluted conditional into a local variable.  This’ll be so easy you’ll refactor more often, and thus write better code.
  6. Quickly navigate files.  You can type abbreviations or acronyms of file names, which makes opening the right file lightning fast.  For example, if you’ve got a file app/views/order_line/_line_template.html.haml, instead of typing “_line_template” into the open file dialog, you can use “lt”.  This works for directories too, so if you have multiple “_line_template” files in different directories, you can use “ol/lt” to specify the one in the order_line directory.
  7. Run Rake tasks and Rails generators.  Not a huge deal, but being able to run them from RubyMine saves you some switching back and forth.  You also save a bit of typing with autocomplete.
  8. Quickly run your unit tests with a hotkey.  If you’re working on a unit test, there’s one hotkey to run it and show the colour-coded results right in RubyMine.  There’s no need to switch into a console and retype the test name to run it.
  9. Have your unit tests run faster.  RubyMine has built-in support for Spork, a gem that makes running your unit tests faster by pre-initializing the Rails environment.  This makes a surprisingly big difference; you’ll be able to run a test in about a second.
  10. Their development team moves fast.  They were one of the first to support Rails 3.0, and they respond quickly to bug reports, so you can reasonably expect they’ll stay up to date.

RubyMine isn’t free, but at $69 for a developer license, it’s not enough money to make anybody who is serious about coding hesitate.  They have a 30-day trial, so give it a try now.  (We’re not affiliated with RubyMine in any way other than me being a fanboy.)

