The speed of collections and For loops in C#

Some of the .NET training I’m doing started me wondering about speeds and things. So, I wrote some tests and turned up some interesting things…

First off, I tried comparing the speed of populating and reading from generic and normal collections. I found that generics are much faster to populate as well as to read from. I’d expected the latter (no type conversion needed), but not the better speed at population – I guess this is because the types can be checked at compile time. I tried this with both a value type (so there might be boxing/unboxing) and a reference type; each time the result was the same: non-generics took ten times as long as generics.
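
If you want to reproduce that comparison, here’s a minimal sketch using Stopwatch – the element count is picked arbitrarily for illustration:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

class CollectionTiming {
    const int N = 1000000;

    static void Main() {
        // Non-generic: every int is boxed on Add and unboxed (cast) on read.
        Stopwatch sw = Stopwatch.StartNew();
        ArrayList oldList = new ArrayList();
        for (int i = 0; i < N; i++) oldList.Add(i);
        long sum = 0;
        for (int i = 0; i < N; i++) sum += (int)oldList[i];
        sw.Stop();
        Console.WriteLine("ArrayList: {0} ms", sw.ElapsedMilliseconds);

        // Generic: stores ints directly - no boxing, no cast on read.
        sw = Stopwatch.StartNew();
        List<int> newList = new List<int>();
        for (int i = 0; i < N; i++) newList.Add(i);
        sum = 0;
        for (int i = 0; i < N; i++) sum += newList[i];
        sw.Stop();
        Console.WriteLine("List<int>: {0} ms", sw.ElapsedMilliseconds);
    }
}
```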

Populating a generic list is twice as fast if it has its capacity assigned. E.g.

List<SomeObj> myList = new List<SomeObj>(10000);

Populating a non-generic list is actually slower if it has its capacity assigned. I have absolutely no idea why.

FOR Loops are slightly faster than FOREACH loops. However, the difference is piddling, so I’d actually recommend not worrying. Out of preference, I’ll use FOREACH, ‘cos it’s easier to read.
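
The same Stopwatch approach works for comparing the two loop styles – a rough sketch, with the list size again just picked for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class LoopTiming {
    static void Main() {
        const int N = 1000000;
        List<int> items = new List<int>(N);
        for (int i = 0; i < N; i++) items.Add(i);

        // Plain FOR loop, indexing into the list.
        long sum = 0;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < items.Count; i++) sum += items[i];
        sw.Stop();
        Console.WriteLine("for:     {0} ms", sw.ElapsedMilliseconds);

        // FOREACH loop, going via the enumerator.
        sum = 0;
        sw = Stopwatch.StartNew();
        foreach (int i in items) sum += i;
        sw.Stop();
        Console.WriteLine("foreach: {0} ms", sw.ElapsedMilliseconds);
    }
}
```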

Looking at converting types (well, an integer in most of my tests) I found that:

  • AS is slightly faster than a cast
  • (cast) is much faster than System.Convert

It’s worth noting that if a conversion fails, AS will just return NULL, whereas a cast throws an exception. Testing for null is much faster than raising an exception, so AS has a definite speed advantage – and this is why you shouldn’t handle expected failure cases using, um, exceptions. Instead, test for the failure and deal with it directly – for example, use something like TryParse. (Actually, I should give that a whirl, see how long it takes.) E.g.

int w = 12;
object o = w;

//fastest conversion and error handling - note that 'as' with a value
//type needs a nullable target; it returns null on failure rather than throwing
int? x = o as int?;
if (x == null) { /* handle the failure */ }

//Okay speed, very slow error handling
try {
    int y = (int)o;
} catch (InvalidCastException e) {}

//Don't do this
try {
    int y = System.Convert.ToInt32(o);
} catch (InvalidCastException e) {}

I’ll get back to you all about the TryParse thing.
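
In the meantime, here’s roughly what the TryParse pattern looks like, assuming the value arrives as a string – the failure path costs a branch rather than a thrown exception:

```csharp
using System;

class TryParseDemo {
    static void Main() {
        string input = "12";

        // TryParse returns false on bad input instead of throwing,
        // and puts the parsed value in the out parameter on success.
        int value;
        if (int.TryParse(input, out value)) {
            Console.WriteLine("parsed: {0}", value);
        } else {
            Console.WriteLine("not a number");
        }
    }
}
```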

Developer Day 4

So, I went to Developer Day 4, and it was very good. I’m now looking forward to WebDD. So, what of the talks at this one…

I went to Ben Lamb’s “How to write crap code in C#”. It was pretty simple, but showed just what you can do to compromise performance. Actually, the biggest message I got from it was that it’s worth testing some of the standard ‘performance tips’ – which was funny as I did that just last week.

The other notable talk was “Securing ASP .NET Websites” by Barry Dorrans. Apart from it being nice to listen to someone with a proper accent, it was a good high-level view of the decisions that you have to make when building a website like that. Some of it was new, some of it was old hat, and it was nice to see the reasoning behind it all. He’s a characterful speaker too.

Also, the talk “Securing Web Services using WS-*” by Chris Seary was a good ‘un – finally, I have an answer to the question “Why bother? Why not SSL or IPSec?”. Nice to have a bit of a higher-level view explained.

In addition, I went to one about “Using and Abusing Reflection” – which seemed a bit too specialised to be of use generally – and making fun of the Irish isn’t a great laugh. Our HR manager would have me warned if I ever did something like that – and quite right too.

Finally, there was the “Technet Highlights” talk, which was great fun, but pretty content free. It did say it wouldn’t be techy. I guess I’d just wanted to hear more of what the buzz was in Barcelona, what things are hot and what’s not (and what the stylish developer will be coding in this season). Still, they were generous with the swag – I’m not sure who they mugged to get all that.

The conclusion – I’ll be going to the next one (unless I’m promoted into management and never touch code again (Not likely))

Comments from my old blog:

Thanks for swelling my ego; I’m glad you enjoyed it and found it useful.

By Barry Dorrans at 21:09:42 Monday 4th December 2006

What the hell are the System.Drawing.Color predefined Colors?

The .NET framework has a number of predefined colours in the System.Drawing.Color class. You’d think these would be easy to iterate over – after all, there are quite a lot of them, and I can see that being useful for, say, drawing palettes.

Well it ain’t easy. They’re not an enumeration, so you can’t iterate over them. Instead, to get a list of the colours, you’ve got to do something like:

List<Color> colorList = new List<Color>();

Array colorsArray = Enum.GetValues(typeof(KnownColor));
KnownColor[] allKnownColors = new KnownColor[colorsArray.Length];
Array.Copy(colorsArray, allKnownColors, colorsArray.Length);

foreach (KnownColor c in allKnownColors) {
    Color col = Color.FromKnownColor(c);
    // skip the system colours (window chrome etc.) and Transparent
    if ((col.IsSystemColor == false) && (col.A > 0)) {
        colorList.Add(col);
    }
}

That’s a lot of work for something obvious like iterating over colours!
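
As an aside, the Array.Copy step can be skipped, since foreach will happily iterate the Array that Enum.GetValues returns directly. A slightly shorter sketch (it needs a reference to System.Drawing):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

class ColorLister {
    static void Main() {
        List<Color> colorList = new List<Color>();

        // foreach casts each element of the returned Array to KnownColor.
        foreach (KnownColor c in Enum.GetValues(typeof(KnownColor))) {
            Color col = Color.FromKnownColor(c);
            // skip the system colours and the fully transparent entry
            if (!col.IsSystemColor && col.A > 0) {
                colorList.Add(col);
            }
        }

        Console.WriteLine("{0} colours found", colorList.Count);
    }
}
```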

Two interesting articles about analysis

Two fascinating articles. I agree with Marcus Ranum: it is interesting how Feynman’s “Personal observations on the reliability of the shuttle” highlights the good quality of the software. Given how software is usually so bad, how did this happen? Well, it seems that NASA recognised that testing is expensive and hard, and that ‘have you tried rebooting’ is not an adequate answer to a problem. Good testing takes time, money and the aggressive pursuit of something better than ‘good enough’.

Those boys at NASA clearly understand software testing. After all, it ain’t rocket science.

Software Time Estimates

Control is a closed loop process.

You have some input, something happens, you look at the output and adjust your input again. This is implicit in quality procedures everywhere. This is what testing is.

Over the last few days I’ve been asked to estimate how long various bits of work for a potential customer will take. I’ve tried to make good estimates, but the truth is that I have never seen a comparison of how long was estimated at the beginning of a project, and how long was actually taken.

In other words, I still only have my gut feeling, my perceptions, to guide me as to how accurate I think my estimates are, despite the fact that this is a clearly measurable metric. The control loop is still open.

At the moment I provide estimates like “I think it’ll take about 10 days, but I’m not really very sure”. Wouldn’t it be better to know “Andy normally under-estimates by an average of 10%, with a standard deviation of 10%”? Then if, for example, I estimated 10 days, we’d know that it’d actually take 10–12 days 68% of the time, or 9–13 days 95% of the time.

Perhaps this exercise is performed at a management level already – but as the developers are being the ones asked to estimate for technical details, it’s us that need our accuracy fed back for those aspects.

Now, I realise that there’s always going to be a human aspect to providing an estimate, that projects often aren’t that clear and easy to cut up, and that often we’re working with new things that we simply aren’t sure about. There’s also a cost-benefit question as to the effort in feeding back accuracy data. But I think that there are broad trends we should be able to pull out.

I guess what I’m thinking is that we should have a process for optimising our estimates. I guess that this would involve examining statistical methodology. The best we’ve got at the moment is an informal “well, that took longer/less time than I thought” after we’ve completed some work – and over projects that can span years, you just lose track.

It does strike me that it should be straightforward to collate this data, and then feed it back at the end of the project. Close the loop. Regain some control.
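
As a sketch of what closing the loop might look like: here’s a hypothetical calibration calculation. The history figures are made up purely for illustration – the idea is just to turn past (estimate, actual) pairs into a bias and a spread you can apply to the next estimate:

```csharp
using System;

class EstimateCalibration {
    static void Main() {
        // Hypothetical history of (estimated days, actual days) per task.
        double[,] history = {
            { 10, 11 }, { 5, 6 }, { 20, 21 }, { 8, 10 }, { 3, 3 }
        };

        int n = history.GetLength(0);
        double[] ratios = new double[n];
        for (int i = 0; i < n; i++) {
            ratios[i] = history[i, 1] / history[i, 0]; // actual / estimate
        }

        // Mean ratio = average bias (e.g. 1.1 means 10% under-estimation).
        double mean = 0;
        foreach (double r in ratios) mean += r;
        mean /= n;

        // Sample standard deviation of the ratios = how consistent we are.
        double var = 0;
        foreach (double r in ratios) var += (r - mean) * (r - mean);
        double sd = Math.Sqrt(var / (n - 1));

        // For a new 10-day estimate, a rough 68% window is mean +/- one sd.
        double estimate = 10;
        Console.WriteLine("bias {0:F2}, sd {1:F2}: expect {2:F1} to {3:F1} days",
            mean, sd, estimate * (mean - sd), estimate * (mean + sd));
    }
}
```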

Logging, the .NET Framework, and why I used Log4NET

So recently I’ve had a need to do quite a lot of logging. Moreover, I needed the logging I was doing to be very flexible. I’m kind of new to .NET, and so I found myself learning about its ‘Debugging’ and ‘Tracing’ facilities. (Incidentally, WTF – they couldn’t call it ‘Logging’? Presumably, this is ‘cos Java has a ‘Logging’ API.)

Long story short, .NET’s built-in logging wasn’t bad, but the Log4NET project proved a lot better.
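
For flavour, here’s a minimal Log4NET sketch – it assumes the log4net assembly is referenced, and uses BasicConfigurator just to wire up a simple console appender (real projects usually configure appenders in XML instead):

```csharp
using log4net;
using log4net.Config;

class LoggingDemo {
    // One static logger per class is the usual log4net pattern.
    private static readonly ILog log =
        LogManager.GetLogger(typeof(LoggingDemo));

    static void Main() {
        // Simplest possible setup: everything to the console.
        BasicConfigurator.Configure();

        log.Debug("starting up");
        log.Info("something worth noting");
        log.Error("something went wrong");
    }
}
```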
