What the hell are the System.Drawing.Color predefined Colors?

The .NET framework has a number of predefined colours in the System.Drawing.Color class. You’d think these would be easy to iterate over – after all, there are quite a lot of them, and I can see that being useful for, say, drawing palettes.

Well, it ain’t easy. The colours are static properties rather than an enumeration, so you can’t iterate over them directly. Instead, to get a list of the colours, you’ve got to do something like:

List<Color> colorList = new List<Color>();

Array colorsArray = Enum.GetValues(typeof(KnownColor));
KnownColor[] allKnownColors = new KnownColor[colorsArray.Length];
Array.Copy(colorsArray, allKnownColors, colorsArray.Length);

foreach (KnownColor c in allKnownColors) {
    Color col = Color.FromKnownColor(c);
    // Skip the system colours (window chrome etc.) and the fully
    // transparent entry (Color.Transparent has an alpha of zero)
    if (!col.IsSystemColor && col.A > 0) {
        colorList.Add(col);
    }
}
That’s a lot of work for something as obvious as iterating over colours!
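With LINQ (on .NET 3.5 or later), the same list can be built rather more compactly – a sketch only, with the filtering unchanged; the class and method names here are just made up for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

static class PaletteDemo {
    // Build the list of predefined, non-system, non-transparent colours
    public static List<Color> GetPaletteColors() {
        return Enum.GetValues(typeof(KnownColor))
            .Cast<KnownColor>()
            .Select(Color.FromKnownColor)
            .Where(c => !c.IsSystemColor && c.A > 0)
            .ToList();
    }
}
```

Calling PaletteDemo.GetPaletteColors() should give roughly 140 colours – the named web colours, minus the system ones and Transparent.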


Two interesting articles about analysis

Two fascinating articles. I agree with Marcus Ranum: it’s interesting how Feynman’s “Personal observations on the reliability of the shuttle” highlights the good quality of the software. Given how software is usually so bad, how did this happen? Well, it seems that NASA recognised that testing is expensive and hard, and that ‘have you tried rebooting?’ is not an adequate answer to a problem. Good testing takes time, money and the aggressive pursuit of something better than ‘good enough’.

Those boys at NASA clearly understand software testing. After all, it ain’t rocket science.


Software Time Estimates

Control is a closed loop process.

You have some input, something happens, you look at the output and adjust your input again. This is implicit in quality procedures everywhere. This is what testing is.

Over the last few days I’ve been asked to estimate how long various bits of work for a potential customer will take. I’ve tried to make good estimates, but the truth is that I have never seen a comparison of how long was estimated at the beginning of a project with how long it actually took.

In other words, I still only have my gut feeling, my perceptions, to guide me as to how accurate I think my estimates are, despite the fact that this is a clearly measurable metric. The control loop is still open.

At the moment I provide estimates like “I think it’ll take about 10 days, but I’m not really very sure”. Wouldn’t it be better to know “Andy normally under-estimates by an average of 10%, with a standard deviation of 10%”? Then if, for example, I estimated 10 days, we’d know that it’d actually take 10-12 days 68% of the time, or 9-13 days 95% of the time.

Perhaps this exercise is performed at a management level already – but as the developers are the ones being asked to estimate the technical details, it’s us who need our accuracy fed back for those aspects.

Now, I realise that there’s always going to be a human aspect to providing an estimate, that projects often aren’t that clear and easy to cut up, and that often we’re working with new things that we simply aren’t sure about. There’s also a cost-benefit question as to the effort in feeding back accuracy data. But I think that there are broad trends we should be able to pull out.

I guess what I’m thinking is that we should have a process for optimising our estimates, and I guess that would involve examining statistical methodology. The best we’ve got at the moment is an informal “well, that took longer/less time than I thought” after we’ve completed some work – and over projects that can span years, you just lose track.

It does strike me that it should be straightforward to collate this data and then feed it back at the end of the project. Close the loop. Regain some control.
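The machinery needed is tiny. As a sketch (all names and numbers here are made up for illustration): given historical (estimated, actual) pairs, the mean and standard deviation of the relative error fall straight out:

```csharp
using System;
using System.Linq;

static class EstimateCalibration {
    // Relative error of each estimate: positive means we under-estimated.
    // Returns the mean and (population) standard deviation of the errors.
    public static (double Mean, double StdDev) Calibrate(
        (double Estimated, double Actual)[] history) {
        double[] errors = history
            .Select(h => (h.Actual - h.Estimated) / h.Estimated)
            .ToArray();
        double mean = errors.Average();
        double stdDev = Math.Sqrt(errors.Select(e => (e - mean) * (e - mean)).Average());
        return (mean, stdDev);
    }
}
```

Fed with, say, (10, 11), (20, 22) and (8, 8.8), that reports a 10% average under-estimate; with a real history behind it, the mean and deviation give exactly the “Andy normally under-estimates by 10%” statement above.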


Logging, the .NET Framework, and why I used Log4NET

So recently I’ve had a need to do quite a lot of logging. Moreover, I needed the logging I was doing to be very flexible. I’m kind of new to .NET, and so I found myself learning about the ‘Debugging’ and ‘Tracing’ support in it. (Incidentally, WTF – they couldn’t call it ‘Logging’? Presumably this is ‘cos Java has a ‘Logging’ API.)
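For reference, the built-in tracing looks something like this – a sketch using System.Diagnostics, where the switch name “GeneralSwitch” is just made up for the example:

```csharp
using System.Diagnostics;

static class TracingDemo {
    public static void Run() {
        // A TraceSwitch controls verbosity; listeners decide where output goes
        var sw = new TraceSwitch("GeneralSwitch", "Controls general tracing");
        sw.Level = TraceLevel.Info; // normally set in the app's .config file

        Trace.Listeners.Add(new ConsoleTraceListener());
        Trace.WriteLineIf(sw.TraceInfo, "Service starting");
        Trace.WriteLineIf(sw.TraceVerbose, "This line is filtered out at Info level");
    }
}
```

It works, but the configuration of levels and listeners is where log4net pulls ahead.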

Long story short, .NET’s built-in logging wasn’t bad, but the Log4NET project proved a lot better.


Code Access Security when programming Windows Services in .NET

So, I’ve been bad about blogging for a while – busy at work trying to learn things. Anyhoo, I’ve been writing a Windows Service using the .NET framework’s ServiceBase class, and I found something interesting when I tried to add Code Access Security (CAS) to it.

My service connects to a SharePoint 2007 service every so often, queries a List, sends a few emails, and logs some information. The main additional assemblies it uses are Microsoft.SharePoint and Log4NET for the SharePoint and Logging parts respectively.

I tried adding CAS like so:

[assembly: FileIOPermission(SecurityAction.RequestOptional, Unrestricted = true)]

I knew I’d need other permissions, and that I definitely wanted this one; my plan was that, since RequestOptional causes all permissions not requested by RequestOptional or RequestMinimum to be refused, I would get permission errors. I’d then work my way through my code, adding the minimum set of permissions I required.
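The full set of requests would then end up looking something like this – a sketch only, with permissions picked to match what the service does (file logging and sending mail); the exact set would come out of that trial-and-error process:

```csharp
using System.Net.Mail;
using System.Security.Permissions;

// Anything not listed here is refused: RequestMinimum is what the
// assembly must have to load, RequestOptional is what it may also use.
[assembly: SecurityPermission(SecurityAction.RequestMinimum, Execution = true)]
[assembly: FileIOPermission(SecurityAction.RequestOptional, Unrestricted = true)]
[assembly: SmtpPermission(SecurityAction.RequestOptional, Unrestricted = true)]
```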

What I got was the security exception “That assembly does not allow partially trusted callers”. This wasn’t the failed permission demand I’d expected; rather, one of the assemblies couldn’t be called from a partially trusted assembly (which mine became as soon as I started adding CAS).

I was surprised. I didn’t think that the SharePoint or Log4NET DLLs would complain about being called from a partially trusted context. At the suggestion of Dominick Baier on Google Groups, I used Lutz Roeder’s Reflector to look inside the assemblies and check for the [AllowPartiallyTrustedCallers] attribute.

Both the Log4NET and SharePoint DLLs had this attribute, so they weren’t causing the exception. Then I tried the System.ServiceProcess DLL, which contains the ServiceBase class I was subclassing. Tada! It doesn’t allow partially trusted callers. Naturally, when my code ran, it made calls to its parent class – and that parent class lives in an assembly I couldn’t access from a partially trusted context.

I guess that makes sense – I mean, when exactly would you want something partially trusted interacting with your services? They’re a bit, well, important for that.

Guess I won’t be applying CAS that way. Probably don’t need to, then: if my code can only be called from a full-trust context, why would an attacker bother abusing my code? Their app would be fully trusted too – it could abuse the machine directly.
