SharePoint UK User Group Meeting with Lawrence Liu

So, Thursday night the user group had a meeting with Lawrence Liu. I’d gone to find out about the “Fantastic 40” templates – but it wasn’t really about that. Given that the 20 templates released so far are essentially showcases for the designs you can make with the out-of-box functionality, and that the other 20 aren’t finished yet (although they look like more interesting customisation), I was actually glad that the talk was more varied.

Lawrence highlighted some things. First, the newsgroups aren’t really being monitored as much as the forums – so ask questions in the forums instead. Secondly, we’re getting a new community site for SharePoint – the world’s first Internet-facing SharePoint system. There is some talk about moving the SUGUK site to it – I think that would be good, having a common community. This site isn’t open just yet – they’re working on Passport integration (yeah, I know, but other than Passport it sounds like a great idea).

The second part of the evening was fascinating – it was about the ‘pain points’ of SharePoint 2007, and what the general plan for SharePoint vNext was. The list of pain points started:

  1. Dev Documentation
  2. IT Pro documentation

…which exactly matched our problems so far. The top 2 were bang on, which is a good sign.

There were some interesting points up there too:

Tools – Well, that I’d thought of, but he mentioned “Visual Studio 2005 Extensions for WSS”, which will make creating features less of an arcane, esoteric pain in the ass. It’ll be able to take a site definition and reverse engineer it into a feature, which will be cool. There’s also the hope of more community-based tools, but I’m a little worried that we’re going to end up with a scattering of different little applications. I’d prefer one tool – after all, that’s one of the benefits of Visual Studio (except when you have to use Caspol, InstallUtils, etc.). Perhaps there’s room for a ‘collection of tools project’?

End-user training materials – Um, there isn’t any, really. MS are planning to release a feature for setting up a training environment soon, and hopefully there will also be materials accompanying that. They’re planning a SCORM training module for it.

Other than that – lots of bits of information, but too much to go into (or remember all that easily).

Regarding SharePoint vNext, well, there will be a Service Pack before vNext. The initial plan is for it to be out in roughly 2 years, and it’ll be an incremental improvement, rather than a leap forwards. Some of the folks at the user group meeting seemed to want something more dramatic – but let’s face it, it takes time to learn a new version, it can take a year for projects to really roll into motion, and nobody wants to buy if they can get the next version in 6 months – I think anything less than 2 years (or even 3) is too fast a cycle. 2 years is probably okay for an ‘improvement’ version, although I agreed with some of the comments about vNextNext needing to be something more dramatic. And I’m totally with Colin Byrne’s point about ditching CAML – God awful markup that it is.

The knowledge management extensions for SharePoint look like they’re being pushed back – to vNext, I think Lawrence said. There are two problems with it, as far as they’re concerned – it only works with English, and it only uses Outlook email as the data source for trying to produce someone’s knowledge areas. Given that they’ve got the SharePoint server itself, possibly desktop search, etc., I saw his point. Still, it’s possibly quite exciting for larger organisations.

Areas of emphasis for vNext – Search and something. Forgotten what the other thing was. But the search team have been given a bit more flexibility to move rapidly. Something about a competitor whose name ends in ‘oogle’.

I guess the only other impression I took away was that there is a lot of stuff coming out in the range of one to three months. Hopefully not all at once, it’d be a lot to take in.


Site Collection Usage Reports

So, I wanted to view the Site Usage reports after reading a post by Joel Oleson.

I went to SharePoint Central Administration > Operations > Usage Analysis Processing. I enabled the logging there. Following the further instructions, I then enabled logging for MOSS. This is in SharePoint Central Administration > Shared Services > Usage reporting. (If you don’t do both in MOSS, it gives you an error message saying that it needs ‘Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing’ enabled. That’s why Joel mentioned it!)

I then went to view the usage for my site collection (Site Actions > Site Collection Usage Reports).

I was prompted for my username and password 3 times, which was puzzling. It looked like the HTTP Basic authentication dialog…

… and after that I got a page saying “Service Unavailable”. Worse, when I tried going back to my site, I got the same message. Checking IIS, I found the AppPool had stopped.

Examining the event log, I’ve got a bunch of errors from the .NET Runtime saying:

.NET Runtime version 2.0.50727.42 – Fatal Execution Engine Error (7A05E2B3) (80131506)

I’ve got no idea what’s going on, and can’t find any documentation. Great.

More info: I reverted my VM and tried again. I’m now getting a “401.1 Unauthorised” response on the page, but at least the AppPool isn’t dying.

I did get given a plausible answer, though I haven’t tried it yet.

Comments from my old blog:

If I remember correctly this is caused by the Reporting Services components that it uses to render the pages. Somewhere there is a dependency on the service’s user profile, perhaps the Temp folder, but regardless usually there is no profile created since the service account has never been logged on interactively to the server so that the profile could be created. This results in a crash of the w3wp process which gives you the symptoms you have experienced.

I have been fixing this by logging on one time as the service onto the server which will be serving these pages.

By A Ray at 16:59:55 Thursday 9th August 2007

Yup, that’s pretty much what the answer I was given was – log in once to create that profile.

By Andy B at 16:57:05 Monday 20th August 2007


The Week in Pictures Library webpart

Just had a look at this web part. I was looking for a way to have a page display a random image in a web part, selecting from a picture library of my choosing. This web part sounded ideal.

Well, it’s not ideal. It seems that it will only select images from a picture library called ‘This Week in Pictures Library’. That’s right, I had to change the picture library name. I’m hoping that I’ve missed something obvious, ‘cos that seems really dumb if I’ve not. I mean, what if I want ‘Local Images’ as my library? Does that mean I can’t use this web-part?

I suppose that one possibility is that it’s matching the name of the web part with the library, so I could change them both to ‘Local Images’. I shall test and report back.

My mistake. You can specify another name for the picture library. It’s under the ‘Slideshow > Image Library Name’ option. Kind of obvious. Wonder why this didn’t work at first? Must have had a typo.

Comments from my old blog:

Have you figured out how to get it to randomly pick a picture from the library? As far as I can tell, it just picks the last picture added to the library. That’s not very helpful to me.

By Jared at 18:22:15 Wednesday 7th February 2007

That’s a good point, I’m not sure. I shall check.

By Andy B at 17:43:49 Tuesday 13th February 2007

There is an issue with this WebPart. If you have two of them on the same page, and have them pointing to different picture libraries. Then you go and click slideshow on both of them, they both point to the first inital library. Is this a bug?

By Atta at 21:07:47 Wednesday 28th March 2007

I don’t know, it could well be. I’ve got to be honest, I spent a half hour looking at one once. I’ll try to take a look when I get a moment…

By Andy B at 10:11:41 Friday 13th April 2007


Nintex workflow extensions…

Angus Logan raves about Nintex workflow extensions. Design workflows over a web interface? How cool is that!

It’d be interesting to see how it works, but if it lives up to its promise, that’s a very cool product!

Comments from my old blog:

Product has launched and the feedback has been great. I think we have lived up to the promises.

By Brett Campbell at 16:06:35 Tuesday 24th April 2007


Services in SharePoint 2007

One of my tasks recently has been building a service for SharePoint 2007. The idea is that this service would read a set of items in a list, apply some rules, and send out a notification email to various users as a sort of ‘digest’ of things that they needed to deal with. In the end, I produced a Windows service to do this – but there are clever folks like Colin Byrne and Andrew Connell who’re starting to figure out how to build ‘services’ (or ‘jobs’) into SharePoint 2007 directly, and I suspect this overcomes some of the issues I found.


What did I learn today (and yesterday) about testing

I’ve been testing some code I’ve written for sending out notification emails according to a moderately complex set of rules. So, what have I learnt?

  • Developers make rubbish testers. We know our own code too well, and already have expectations of how things should work.
  • Testing has too many variables for any reasonably sized application. You can’t test exhaustively. Equally, testing only what you think needs testing will also miss things. I’m beginning to think some automated, semi-random testing (if you can) is the way forward – and you’ll still miss things.
  • Specialist testers are good. I’m convinced a specialist would see what I’ve missed, and have good ideas about how to test generally. I don’t think I’m alone in that.
  • Automated testing is not a panacea… I really don’t see how it could apply in this case, at least not without a test program that could create database records, read a SharePoint library, and read an Exchange server. All reasonable – but how do I then test the test app? ‘Cos it’s as complicated as my actual application.
  • … and unit testing isn’t a panacea either… I know, they’re related, but unit testing does at least imply that you’re testing, well, small units of code. That relies on problems not occurring because of mismatches in the interoperation between units.
  • … and with both, you have to remember that you’re designing the tests. If you didn’t think of a case when writing your code, are you likely to think of it when writing your tests?
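
The ‘semi-random testing’ idea from the list above can be sketched quite easily. In this Python sketch, make_digest is a made-up stand-in for the kind of rule I was testing (it is not my actual notification code): generate lots of random inputs, and check invariants that must hold for any input, rather than hand-picked cases.

```python
import random

def make_digest(items: list, limit: int) -> list:
    """Hypothetical rule under test: keep at most `limit` items, no duplicates."""
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
        if len(out) == limit:
            break
    return out

# Semi-random testing: throw generated inputs at the code and check
# invariants that must hold for *any* input, not just cases we thought of.
random.seed(42)
for _ in range(1000):
    items = [random.choice("abcde") for _ in range(random.randrange(20))]
    result = make_digest(items, limit=3)
    assert len(result) <= 3                     # never over the limit
    assert len(result) == len(set(result))      # never a duplicate
    assert all(r in items for r in result)      # never invents items
print("1000 random cases passed")
```

You still have to think of the invariants, of course – which is rather the point of the last bullet.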

All that said, my application seems to work pretty well. There was just a lot of manual cross-referencing of results to check that things worked, so I had a lot of time to think.

I’d really like a project where I can get thoroughly stuck into unit testing. I didn’t use it in this one, but I did think about how I would’ve done it – and I suspect I’d have missed one of the bugs I found. I suspect that unit testing’s main advantages are in forcing developers to actually think up front (sometimes rare), and in ensuring a consistent public interface despite internal changes. But I don’t see how you can get away from the fact that someone has to sit and decide what to test – and that way things can be missed.


Benchmark: Speed of Encryption and Decryption using .NET Framework classes

I was reading about security stuff in the .NET framework, and dealing with its cryptographic classes, and it set me wondering. Here are all these different encryption classes, with different block and key sizes, cipher modes, all that jazz – but what is their performance like? Specifically, I’d read something saying that some ‘weaker’ encryption algorithms are preferred in some speed-critical applications because they’re faster. I wondered: how much faster?

Thus, I decided to benchmark the symmetric algorithms in the .NET Framework – DES, Triple DES, RC2 and Rijndael. To make life interesting, I thought I’d try them with different key sizes, block sizes, and cipher modes.

So, I’ve linked to definitions of these factors, but for those who don’t want to read vast chunks of Wikipedia, here are my (simplified) definitions. For anyone really interested in learning how to program with encryption properly (and in learning why their 128 bit key probably isn’t 128 bits strong) I can strongly recommend the book ‘Practical Cryptography’ by Bruce Schneier and Niels Ferguson.

Symmetric ciphers are ones like you used when you were a kid. You have some operation that turns a message into garbage, and the reverse of that operation (using the same key) turns that garbage back into a message. Some ciphers use a different key to reverse the operation – those are asymmetric ciphers, and are a whole different kettle of fish.

Keys are the password you use with your cipher. For example, if your cipher as a kid was to shift all letters in the alphabet, then the key might be the number of characters shifted. Big keys are harder to break. Think of it as being just like a password or PIN. If I tell you that my PIN is 4 digits, you might be tempted to guess all 10,000 possibilities, and on average you’d figure my PIN out after 5,000 tries. If my PIN was 8 digits, there are 100,000,000 options – and you’re less likely to try all the possibilities, eh?
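
That childhood shift cipher can be sketched in a few lines of Python, just to make the ‘key’ idea concrete (toy code – trivially breakable, which is rather the point):

```python
# Toy shift cipher: the "key" is how far each letter is shifted.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def shift_encrypt(message: str, key: int) -> str:
    """Shift each letter forward by `key` places (non-letters pass through)."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) + key) % 26] if c in ALPHABET else c
        for c in message.lower()
    )

def shift_decrypt(ciphertext: str, key: int) -> str:
    """Decryption is just the reverse shift -- the mark of a symmetric cipher."""
    return shift_encrypt(ciphertext, -key)

# A 26-letter alphabet means only 26 possible keys: an attacker can
# simply try them all, just like guessing a short PIN.
secret = shift_encrypt("attack at dawn", 3)
print(secret)                    # "dwwdfn dw gdzq"
print(shift_decrypt(secret, 3))  # "attack at dawn"
```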

Block sizes. Well, okay, some ciphers work on blocks of data, rather than each byte (or each ‘letter’). These are block ciphers. There are also stream ciphers, where each byte is encrypted one by one. Anyway, in block ciphers there is a limit to how much data can be encrypted without ‘leaking’ information. Larger block sizes can encrypt more data without that leakage. (That’s not to say that the block has been decrypted, but an attacker could start to learn things about the contents of that block.)
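
To make ‘working on blocks’ concrete, here’s a small Python sketch of how a message gets chopped into fixed-size blocks, with PKCS#7-style padding on the last block (which, as I understand it, is what the .NET symmetric classes default to):

```python
def pad_and_split(data: bytes, block_size: int) -> list:
    """Split data into fixed-size blocks, PKCS#7-padding the final block.
    PKCS#7: append N bytes, each of value N, where N is the shortfall
    (a full extra block if the data already fits exactly)."""
    shortfall = block_size - (len(data) % block_size)
    padded = data + bytes([shortfall]) * shortfall
    return [padded[i:i + block_size] for i in range(0, len(padded), block_size)]

# A 64-bit (8-byte) block cipher sees this 20-byte message as 3 blocks:
blocks = pad_and_split(b"The quick brown fox.", 8)
print(len(blocks))   # 3
print(blocks[-1])    # b'fox.\x04\x04\x04\x04'
```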

Cipher modes don’t really have a parallel with how you did codes as a kid. I guess I would describe it that if the cipher is about how you make an apparently random set of bits, then the cipher mode is about how you then use them. There are lots of different modes, but the .NET framework classes only seem to support 3 – ECB (Electronic Codebook), CBC (Cipher Block Chaining) and CFB (Cipher Feedback).

So, what are the algorithms:

  • DES – An old encryption standard, now regarded as offering poor security, but so widely used that it is still in operation as a legacy system.
  • Triple DES – An improved version of DES, made by essentially applying the DES 3 times.
  • RC2 – A moderately old encryption algorithm. Flexible key lengths, but short block size.
  • Rijndael (aka AES) – The latest encryption standard. The Rijndael algorithm was selected from several as part of a competition. It wasn’t regarded as the most secure, but it was quite quick. The Advanced Encryption Standard (AES) is actually a subset of Rijndael.

The Test

I found a nice text file – “The complete works of Shakespeare” – as my test data.

For each algorithm, for each mode, key and block size, the test program encrypted and decrypted the data twenty times, and reported the average ‘time’ for each operation. I was using the Win32 QueryPerformanceCounter function, which doesn’t really return a time so much as cycles. However, all the tests were done on the same machine, so they’ll do just fine for comparison purposes.
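
The shape of that timing loop is roughly the following (a Python sketch with a stand-in workload – the actual tests used the .NET crypto classes and Win32 QueryPerformanceCounter; time.perf_counter is Python’s analogue):

```python
import time

def average_time(operation, runs: int = 20) -> float:
    """Time `operation` over several runs and return the mean in seconds.
    time.perf_counter wraps the platform's high-resolution counter."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        total += time.perf_counter() - start
    return total / runs

# Stand-in workload; the real tests encrypted and decrypted the text file
# once per run, for each algorithm / key size / block size / mode combination.
data = bytes(range(256)) * 1000
avg = average_time(lambda: bytes(b ^ 0x5A for b in data))
print(f"{avg:.6f} s per run")
```

As in the original tests, the absolute numbers only mean something relative to each other on the same machine.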

Results

With the several factors tested, there are many ways of slicing the data. It’s worth noting that these results are pretty rough, as the times taken also include file IO operations, and on any modern PC there’s always something else happening at the same time. Also, the times are the total time taken to encrypt and decrypt, which might not be the same for each operation. Treat the results as a loose guide.

First let’s look at the raw results. You can get the results here (Excel file) –EncryptionTimes.

Unsurprisingly, DES is fastest – given its age, and the low level of security it offers now. Triple DES with the longest key it supports was generally slowest. RC2 covered the full range of results, which is also unsurprising, given its flexibility, and Rijndael sort of falls in the middle.

The first thing I noticed was how few tests there were using DES or Triple DES. RC2 and Rijndael are much more flexible in their use.

Next, it’s interesting to note that RC2, DES and Triple DES using Cipher Feedback Mode (CFB) were all very, very slow. They all seem to suffer very badly using CFB.

So, excluding the CFB results then (as they are so exceptionally slow), what do the other results show? Well, Rijndael does not suffer so badly in CFB, although CFB is slower.

ECB appears slightly faster in the table, though examining the CBC Mode Graph shows little difference.

To compare the modes, I looked at just the operations done with the Rijndael cipher.

Again, we see little difference between ECB and CBC, so I guess there’s no reason not to use the more secure CBC mode over ECB (for an example of its weakness see here). Also, for 128 bit blocks (as required by the AES standard), CFB is as quick as ECB.
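
That ECB weakness is easy to demonstrate. Here’s a Python sketch using a deliberately toy block ‘cipher’ (a key-derived XOR pad – not a real cipher, but any deterministic block cipher leaks the same way under ECB):

```python
import hashlib

BLOCK = 8  # toy 64-bit block

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    """Deliberately weak stand-in block cipher: XOR with a key-derived pad.
    A real cipher (DES, Rijndael, ...) shows the same ECB behaviour."""
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def ecb(blocks, key):
    # ECB: each block is encrypted independently -- equal plaintext blocks
    # always produce equal ciphertext blocks, leaking structure.
    return [toy_encrypt_block(b, key) for b in blocks]

def cbc(blocks, key, iv=bytes(BLOCK)):
    # CBC: each block is XORed with the previous ciphertext block first,
    # so repeats in the plaintext are hidden.
    out, prev = [], iv
    for b in blocks:
        c = toy_encrypt_block(bytes(x ^ y for x, y in zip(b, prev)), key)
        out.append(c)
        prev = c
    return out

blocks = [b"SAMEDATA", b"SAMEDATA"]  # two identical plaintext blocks
print(ecb(blocks, b"k")[0] == ecb(blocks, b"k")[1])  # True: ECB leaks the repeat
print(cbc(blocks, b"k")[0] == cbc(blocks, b"k")[1])  # False: CBC hides it
```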

Rijndael is not great in CFB mode with blocks of longer than 128 bits.

Okay, so let’s focus on just one mode (CBC) and look at the results shown in the CBC Mode Graph. Well, it’s interesting to note that RC2 with a 112 bit key was quite quick – faster than with some shorter keys. However, it only took about 6.5% longer to use 128 bit Rijndael – a key that is 14% longer. Doubling the key with Rijndael to 256 bits only took about 10% longer than 128.

Longer blocks take longer to encrypt and decrypt. 64 bit blocks seem a little short these days, only being safe for up to a couple of hundred megabytes. 128 bits seems more reasonable. 256 bits seems excessive. Rijndael seems to have little penalty for using 256 bits over 128, though if you do, you’re not using an AES standard encryption.

Conclusion

DES and Triple DES are old. DES isn’t secure, and Triple DES doesn’t seem to offer much given that Rijndael and RC2 are much faster.

In terms of cipher modes, these classes only seem to support ECB, CFB and CBC. ECB is generally regarded as being a poor mode – it’s not very secure. CFB was typically slower than CBC, and as Microsoft have already implemented the classes, some of the advantages of CFB (i.e. encryption and decryption being identical operations) have been lost.

So, then examining Rijndael in CBC mode, well, there is little penalty for using 256 bit keys or 256 bit blocks. However, it’s probably worth sticking to 128 bit blocks as 1) it is plenty, and 2) it is AES compatible.

All in all, I was surprised by how similar a lot of the results were for different algorithms, and I was surprised by how slow some of the CFB mode operations were.

To be honest, I can’t really think of a reason not to use Rijndael with 128 bit blocks, in CBC mode. Unless time is a really critical factor, 256 bit keys are stronger. Finally, the RijndaelManaged class in the framework is a managed class, rather than a wrapper for a COM object.

So, the winner is Rijndael!

Comments from my old blog:

This is a very informative article. I have just started looking into encryption, and I have come across nothing on the internet that is as concise as your article.
Will you be doing something similar with asymmetric encryption as well?

By Firoz at 06:22:04 Friday 9th February 2007

Yup, well, at some point. The truth is, in the .NET 2.0 framework, there aren’t a lot of other asymmetric algorithms – RSA is about it. I think the .NET 3.0 framework has elliptic curve, and that would be interesting…

So, yes, when I get around to it.

By Andy B at 10:09:43 Friday 13th April 2007
