This post was migrated from Justin’s personal blog, 'Codethinked.com.' Views, opinions, and colorful expressions should be taken in context, and do not necessarily represent those of Simple Thread (and were written under the influence of dangerous levels of caffeination).

Making code run faster is fun. No no, it is more than fun, it is downright addictive. Very few things in this world are more satisfying than sitting down in front of a slow piece of code and making it run even 10% faster. Even better if you can squeeze an order-of-magnitude increase out of it. Ever sat down in front of a piece of code and sped it up 2 times? 5 times? 10 times? If you haven’t, then you probably haven’t really lived. At least not as a programmer, anyway.

But there has to be a limit to the madness. Sure, you don’t want to write inefficient code. You want your code to be as fast as possible without expending any extra effort. But what happens when people start making up rules like using "" instead of String.Empty? I’m not arguing one way or the other, and I’m not going to tell you that "" looks more readable than String.Empty or vice versa.

Edit: And I’m not here to bust on this article either. As the author pointed out in a comment below, he is merely answering a question about differences between the two, not making up rules or recommendations. End Edit.

The author makes a few valid points about strings in switch statements (even though I’m not a big fan of switch statements, yuck!), and at the end of the article he shows some performance numbers. String.Empty took 637ms while "" took only 319ms! Oh my! String.Empty is two times slower than ""! Two times! And then you close the browser, leave the article, and decide that never again in your life will you use String.Empty. And on top of that, you will ridicule anyone who does.

Well, you know what I say to that? 1000000000. Let me reformat that, because it might be hard to read. 1,000,000,000. Or for my European friends, 1.000.000.000. Yep, 1 billion iterations. That means each String.Empty comparison took about 0.000000000637 seconds to complete, versus about 0.000000000319 seconds for "". Oooooooooooh noooooooooo! That is not even enough time for light to travel one foot. Bravo.
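If you want to check that arithmetic yourself, here is the back-of-the-envelope math from the paragraph above as a runnable sketch; the 637ms and 1 billion figures come straight from the cited benchmark, nothing else here is from that article:

```csharp
using System;

class PerIterationCost
{
    static void Main()
    {
        // Numbers quoted above: 637 ms total across 1 billion comparisons.
        double totalMs = 637;
        double iterations = 1000000000;

        // Convert milliseconds to nanoseconds, then divide by the iteration count.
        double perIterationNs = totalMs * 1000000 / iterations;

        Console.WriteLine(perIterationNs + " ns per String.Empty comparison"); // 0.637 ns
    }
}
```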

You do realize that "foreach" versus "for" probably has about the same speed implications as this, right? Are you going to mandate that your developers only use "for" loops? A lot of the Linq methods are also much slower than their iterative counterparts, so are you going to keep doing your filtering in a loop instead of just using the "Where" method? I hope that your answer to these questions is "no". Although I have actually worked somewhere that forbade the use of "foreach". No one paid attention to that rule though.
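To be concrete about the trade being discussed, here is the same filter written both ways; `numbers` is just a hypothetical list for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FilterBothWays
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // The "fast" iterative version.
        var evens = new List<int>();
        foreach (var n in numbers)
        {
            if (n % 2 == 0)
                evens.Add(n);
        }

        // The Linq version: marginally slower, considerably clearer.
        var evens2 = numbers.Where(n => n % 2 == 0).ToList();

        Console.WriteLine(string.Join(",", evens2)); // 2,4,6
    }
}
```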

Here is my list of silly recommendations I have seen that can make code harder to read or modify:

  1. Use "for" instead of "foreach" – I love variables with tiny names… i, j, k, l…. yum.
  2. Prefer collections over using Linq – Mmmmmm… I love writing grouping, sorting, and filtering code manually.
  3. Set every variable to null after using it – If I just give the garbage collector a tiny boost…
  4. Use "myString.Length == 0" instead of using String.Empty or "" – Not exactly expected, but not the worst.
  5. Always use arrays instead of lists – Hmmmm, how often do I actually know the length of a collection ahead of time? Oh, maybe I’ll just make it real big!
  6. Unroll short loops – Am I the compiler?
  7. Use multiply and shift instead of division – Wow, this is just ridiculous.
  8. Pull upper-bound of "for" loop into a local variable – really? I thought that if I stopped using foreach that would be enough. Maybe I should just rewrite it in assembler.
  9. Use sealed classes – Nothing like a sealed class to really fix my app’s performance.
  10. Use non-virtual methods – It was all those virtual table lookups that really did that website in.
  11. Use StringBuilder for everything! – Who cares how long and ugly the code is?
  12. Minimize method calls – I’ve seen some people who follow this one, 5000 line methods anyone?
  13. Prefer public fields to properties – That darn method call overhead. Every millisecond counts!
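To illustrate rule 11, here is what "StringBuilder for everything" looks like next to plain concatenation; for a handful of parts, the compiler already handles the simple version fine (it compiles down to a single String.Concat call):

```csharp
using System;
using System.Text;

class StringBuilderEverything
{
    static void Main()
    {
        string name = "World";

        // Rule 11, taken literally.
        var sb = new StringBuilder();
        sb.Append("Hello, ");
        sb.Append(name);
        sb.Append("!");
        string greeting = sb.ToString();

        // The readable version. Same result, one line.
        string greeting2 = "Hello, " + name + "!";

        Console.WriteLine(greeting == greeting2); // True
    }
}
```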

I always found it funny how many of the people who quote these rules will turn around and write some giant, grotesque piece of code that uses reflection heavily. But somehow the performance of that piece is nothing to worry about! (And no, for the most part, responsibly used reflection isn’t a problem.)

So the next time you are about to recommend silly coding practices that can really hurt the readability of your code, instead promote clean-coding practices, and then do what my friend Simone did: get some real performance numbers so that you know where you actually need to tweak.

And don’t get me wrong, performance is important, and you should always be testing your performance and looking for ways to improve it. Just don’t start sacrificing application maintainability for performance when you don’t actually need it. If you are writing business applications, then chances are that your data access is about a thousand times slower than anything else your application is doing.

If you are building web applications, then you should go down this list for ideas on how to optimize your application:

  1. Caching
  2. Caching
  3. Caching
  4. Caching
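The pattern behind all four steps is cache-aside: check the cache, and only hit the slow thing on a miss. This sketch uses a plain dictionary to keep it self-contained; `LoadProductsFromDatabase` is a hypothetical stand-in for your slow data access, and in a real app you would use something with an expiration policy (ASP.NET's cache or MemoryCache) rather than a bare dictionary:

```csharp
using System;
using System.Collections.Generic;

class CacheAside
{
    static readonly Dictionary<string, object> Cache = new Dictionary<string, object>();
    static int _databaseHits;

    // Hypothetical stand-in for an expensive query.
    static List<string> LoadProductsFromDatabase()
    {
        _databaseHits++;
        return new List<string> { "widget", "gadget" };
    }

    static List<string> GetProducts()
    {
        object cached;
        if (!Cache.TryGetValue("products", out cached))
        {
            // Cache miss: do the slow work once and remember the result.
            cached = LoadProductsFromDatabase();
            Cache["products"] = cached;
        }
        return (List<string>)cached;
    }

    static void Main()
    {
        GetProducts();
        GetProducts();
        GetProducts();
        Console.WriteLine(_databaseHits); // 1 -- only the first call hits the "database"
    }
}
```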

That is all.

P.S. If you want to take away one thing from this post, it is that in the hierarchy of what you should care about in your code, in most cases, performance shouldn’t be at the top. That place should be reserved for readability or maintainability.

20 Comments

Jason Young

Yes, of course you’re right.

In addition to caching, I would add that you want to avoid doing really stupid things. I run into a lot of situations where something takes 1 second (an eternity in computer time) and a small, intelligent change makes it take milliseconds.

I just fixed a site that had major performance problems, and the answer was not caching. It was fixing a couple of key bottlenecks.

Simone

Jason, I agree: real applications suffer, for the most part, from problems that are related to bad programming.
In the last application for which I did a "performance optimization" review, I found the craziest things:
1 – Using ADODB instead of ADO.NET because the developers were used to it. So they used COM interop to talk to ADODB.
2 – Doing a "select *" to select all the items with a certain category id (using ADODB, of course), looping over the results and putting them "manually" into a dataset (obviously, there is no Fill method that accepts an ADODB resultset), then finally calling table[0].Rows.Count to get the number of rows retrieved and returning true or false depending on whether that number was 0.

And then we talk about optimizing String.Empty versus ""… first we have to optimize the knowledge and common sense of developers… oh… wait… this whole topic exists because of a lack of common sense in some developers 🙂

Christopher Harrison

Couldn’t agree more. Write good code first – worry about performance second.

Although I do think that improving web application performance only involves 2 steps:
1. Avoid round trips from client to server.
2. Stay off the database.

Sam

Justin–many developers have asked questions about string.Empty. They want to know the difference. When they ask you, you will say:

"Who cares? Do important stuff!"

I would say:

"Here are the ways they are different. This is what the CLR does. This is how the BCL is implemented. This is how they perform. This is how they can be used in switch. This is how the String class is implemented in the BCL. This is how readonly fields work. This is how switches are compiled in MSIL."

The article is not about my opinion. I am answering questions and you are ranting. –Sam

Justin Etheredge

@Sam Sorry if I came across too harsh on your article. I’ve been hearing a lot of people lately (not just your post) talking about performance on a very micro level. This blog post was all about trying to get people thinking about performance at the right level, not about bashing your post. Your post just happened to be the last thing I saw which triggered my response.

Sam

Justin, thanks for the comment. I never asserted that my post was useful. Also I don’t think the post tells anyone what they should do.

PS. Use "", it’s better 🙂

Jonathan Pryor

Hear, hear!

Though there are other reasons why preferring Collections over LINQ is silly, and it’s not just that writing filtering, grouping, and sorting code by hand is painful. There’s also the time/memory tradeoff — for large datasets, you might not be able to store the entire dataset in a collection (think millions of rows in a database), while the deferred execution nature of LINQ would only pull in elements as they’re needed (and properly done, wouldn’t require that they all be in memory simultaneously).
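(The deferred-execution point can be seen with an iterator; `ReadRows` here is a hypothetical stand-in for a streaming source of millions of rows:)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecution
{
    static int _rowsProduced;

    // Hypothetical streaming source; yields one row at a time.
    static IEnumerable<int> ReadRows()
    {
        for (int i = 0; i < 1000000; i++)
        {
            _rowsProduced++;
            yield return i;
        }
    }

    static void Main()
    {
        // Nothing executes yet -- the query is just a description.
        var query = ReadRows().Where(n => n % 2 == 0);

        // Take(3) pulls rows only until three matches have been found.
        var firstThree = query.Take(3).ToList();

        Console.WriteLine(_rowsProduced); // 5, not 1000000
    }
}
```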

Paco

Good post. About caching: I have seen applications that did caching, caching, caching (and another 5 times caching) and the cache became a bottleneck itself. Don’t use cache before you know that you need it. Always profile before doing any optimization.

Bart Czernicki

Couldn’t agree with you more! I always say this: you can argue better architecture and better performance, etc., but the bottom line is how successful your app was.

In another top post, people are arguing about code folding with regions. Seriously?

Do you really think the guy who is making 10-20k/week from his "fart" application on the iPhone wrote the most beautiful architecture using the most optimized techniques?

I love writing software, but it’s not an art contest.

Jason Short

I agree with most of what you say. But you do know that some of these are rooted in fact, right? In .NET 1.1, foreach versus for loops could be as much as a 10x performance difference. Now that would make you want to make it a rule, wouldn’t it?

Almost every performance change I have made in the past 6 months has been around algorithm choice. The algorithm that works at 1,000 entries fails horribly at a million. Picking the correct algorithm will be a much more satisfying experience at the end of optimization.

Bart Czernicki

@Jason,

You’re not 100% right. Let’s take your for loop:

for (int i = 0; i < somelist.Count; i++)
{
    DoSomething();
}

How do I make that run on multiple threads? How do I parallelize it? It’s not that easy; I would have to change the logic of the code.

If I have a query like this…

var myPeople = from p in somelist where p.FullName.StartsWith("S") select p;

and it takes, say, 30 seconds because it is traversing over 1,000,000 records… I can easily do this:

var myPeople = from p in somelist.AsParallel().WithDegreeOfParallelism(4) where p.FullName.StartsWith("S") select p;

and automatically scale to 4 processors if need be. I think that is a pretty big difference between writing code imperatively and declaratively.

Yeah, for loops will run faster, and I have used them all the time. But with .NET 4.0 coming out and more and more workstations having multiple cores to take advantage of… it’s time to move on to something better 🙂

Justin Etheredge

@Jason No, it wouldn’t, actually. A 10x performance difference in something that comprises .000002 percent of my application’s total runtime is a waste when it makes my code more error-prone or harder to modify.

This is exactly what I am talking about: if using "foreach" versus "for" makes a runtime performance difference in your application, then profile, find out where your pain points are, and then optimize those. In only a very, very, very small percentage of applications would using for versus foreach (even with a 10x performance difference) make any measurable difference in application performance. In most business applications your database calls are thousands of times slower than anything else you are doing in your entire request pipeline.

Al Tenhundfeld

@Jason I agree with Justin. A 10x performance improvement means nothing without context. Improving an operation from 1/100000th of a millisecond to 1/1000000th of a millisecond is rarely worth my time.

I remember reviewing some code years ago. I looked at a method and said, "They should really be using 'for' here instead of 'foreach'." My co-reviewer said, "They should be retrieving these records from the DB as a set instead of iteratively in this loop." I felt stupid. And I think that’s some of the risk of focusing on micro-optimizations; they can make the real problems harder to see.

BTW, did you see this post, Justin?
http://codebetter.com/blogs/patricksmacchia/archive/2009/04/19/micro-optimization-tips-to-increase-performance.aspx

Justin Etheredge

@Al I think that people don’t realize where most of the time is spent in their code and therefore they waste time optimizing in places that they haven’t measured.

And actually I have seen that post! Patrick has one of those applications that benefit from these sorts of optimizations. He is in that 1% I think. In an app like NDepend all you are doing is ridiculous numbers of loops over large numbers of items. Although I bet even he spent most of his time up front optimizing the reflection parts of NDepend first, which are probably way more expensive.

Yann Schwartz

I agree with your point. There’s an anti-rule you’re citing that deserves further bashing, though:

3. Set every variable to null after using it – If I just give the garbage collector a tiny boost…

This is not only overkill, it’s wrong. In release mode, the JIT tracks the lifetime of each local variable, so the GC can treat a local as dead as soon as it is last used, often well before the end of the method. Setting a local variable to null buys you no micro-performance advantage; you’re just adding useless clutter. And you may run into ugly edge cases if your variable is actually a reference to some COM wrapper, since the refcount will sometimes be decremented sooner than you assume.

Anyway, you’re right, most of the advice on this list is useless (in the general case). In the edge cases, well, profiling will show the culprits.

Greg Young

I think we can summarize this even more by saying that algorithmic changes will benefit you more than micro-optimizations under most circumstances… they should certainly be the first place to look.

Troy Tuttle

Bravo!

Something you may consider for a future post… Not only is optimization bad for code readability, but you are likely still guessing at what is needed. Performance is no different from any other software concern. It should be expressed in the form of a requirement. If it’s important to your customer, then your customer should be able to express how well the app should perform under load. Anything else is simply a guess, and an invitation to open-ended developer gold-plating.

bbqchickenrobot

In my opinion, this post overlooks a few things. I’m not sure whether that was intentional or not, but I’d guess intentional, this being more of a rant-type post.

There are some valid points here, but the [cited] article has some too. What if I’m writing for a real-time or embedded system that has this sort of performance requirement (micro-optimization)? Is it then OK to just overlook it because my code needs to be more readable? What about the video game developers who throw some asm code into the middle of their C/C++ routines to speed up the game? Do you think they could explain to their users that they needed readable code instead of fast play and a good game experience? No one would buy that game again if they kept that perspective.

I think there is a time and place for everything. Large enterprise business applications and highly available/scalable web apps may need to favor readability over raw performance, since the teams may be comprised of several hundred developers. Sure, things like caching are great, but the fact that I can cache doesn’t mean I should write a routine that fetches records from persistence in 10 seconds when I could do it in 1.

Also, let’s face it, var myVar = "" is hardly unreadable. If a coder cannot figure that out, then he/she is probably not a coder and shouldn’t be a coder, period lol.

Herbert Deem

Hi Justin. I know this is an old post and I’m not sure if you’re even following it anymore.

I wholeheartedly agree with your conclusion. I’m just writing to point out an error that nobody seems to have caught.

637ms = 0.637 seconds
637 ns = 0.000000637 seconds

Incidentally,
0.000000000637 = 0.637 nanoseconds

It doesn’t change the validity of your article, but a mistake like that could be potentially embarrassing for a programmer. 😉

