Wednesday, February 28, 2007

Loaderlock hell!

I recently installed VSTS and I'm still trying to learn my way around it.
One thing I noticed when using it was that I kept getting a "LoaderLock" MDA (Managed Debugging Assistant) whenever I tried to run a piece of interop'ing code in debug mode. This was getting quite annoying, and it didn't seem like there was any problem in the code - but rather in Visual Studio.
Naturally I started googling the problem, and I found several places saying that all I had to do was go to "Debug | Exceptions" and turn off LoaderLock under Managed Debugging Assistants (true, there was also plenty of other advice, like modifying the registry or installing VS service packs, but none of it seemed to do the trick). Unfortunately my Visual Studio does not have "Exceptions" in the Debug menu - or so it would seem...
Luckily my friend Moshe came to the rescue and found out that even though there's no menu item called "Exceptions" in the Debug menu, the Customize dialog will prove that there's supposed to be. And when we investigated further we found the keyboard shortcut to open it: CTRL+ALT+E.
After using this shortcut I was finally able to turn off the MDA that was annoying me in the first place. I wonder what actually caused it...

Thursday, February 22, 2007

Off topic: A Murphy's day

Yesterday I really found out how Murphy must have felt when he formulated his law.
The day as such started out fine. I was planning to go to an afternoon meeting in Stockholm, flying there and back on the same day - nothing to it, I've done it often before.
The meeting was scheduled for 14:00, so my plane, scheduled to arrive at Arlanda at 12:35, should be fine. But then the problems started...

  1. First our plane was delayed for half an hour - no reason given. We rebooked our planned meeting in Stockholm to 16:00.
  2. At 13:00 we were informed that the flight was cancelled due to a container that had gotten stuck in the cargo hold, leaving a dent in the plane. We were automatically rebooked to another plane, expected to leave at 13:20.
  3. Apparently Copenhagen airport's computer system had difficulty dealing with such a massive change - it broke down and delayed the boarding of the new plane quite a lot.
  4. At 14:00 the plane was boarded and ready to leave, but now a fine layer of snow had settled on the wings (it was a blizzard) and we needed to wait to be de-iced.
  5. 14:20 de-icing was completed, but we were now informed that only one runway was open for takeoffs and landings and we'd have to wait in line to take off.
  6. 15:15 Take-off! However, after take-off something felt kinda weird. It turned out that snow had gotten stuck in the door to the front landing gear and they couldn't close it. The purser announced that we would probably have to land again. After a while they succeeded in closing it though, and we could speed up and head towards Stockholm Arlanda. At this point we had realized that there was no way we could make our meeting in Stockholm, but on the other hand we couldn't get off the plane.
  7. 16:30 we landed at Arlanda. Went immediately to the transit center to get rebooked to the next flight home. After 50 min. of waiting in line we got rebooked.
  8. We were rebooked to a plane that was scheduled to leave at 16:30, but which was still boarding at the time we were rebooked, at 17:20.
  9. At 17:40 we had boarded the plane home, but the captain came on the intercom to tell us that Copenhagen airport was shut down, and our timeslot was 2.5 hours away. However, to not lose that slot we had to stay in the plane, ready to leave.
  10. Around 20:00 the plane takes off towards Copenhagen.
  11. At 21:00 we're making our approach to CPH airport, when we're informed that they have once again shut down the only runway due to the blizzard. We have to circle for another half hour.
  12. 21:35 On the ground. Unfortunately there is still a computer breakdown at CPH, so they haven't recorded our landing, meaning that no ground crew has been sent to the gate - so we have to wait for the door to be opened.
  13. 21:55 We're finally in the arrival area of the airport, only to find a kilometer-long line for the taxis and no taxis arriving (the blizzard has closed down the roads). Trains are delayed 1h+.
  14. 22:35 I get on the first train bound for Copenhagen central station. This is usually a 10 min. train ride, but due to the weather, missing train crew and problems with the train signals we're delayed. We spend quite a lot of time waiting just outside the central station without getting in.
  15. 00:35 We finally arrive at the central train station. However, I soon realize there's no way to get home from there. All other trains and buses are cancelled, no taxis to be seen and lots of other people stranded in the same situation. I finally decide to go to the office and wait for better times.
  16. 02:30 I finally manage to hijack a taxi (by pure luck) that dares to make its way north. Half an hour later I get home (having to walk part of the way due to heavy snow).
Geez...I just hate days like that.

Tuesday, February 20, 2007

Mysteries and Mental Models

I've been wanting for quite some time to write a post about a phenomenon I've encountered numerous times, and now I've finally persuaded myself to put my thoughts down in words. Let's for now just refer to the phenomenon as the "Black Box of Mystery" (BBM).

It's a well-known fact in the world of human-computer interaction that a lot of usability problems arise when the mental models embodied in a user interface don't match the mental models of the user. The software then ceases to be intuitive, and the users stop using it (or at least they'll hate using it). Put in other words, a user interface should behave as if it is what the user thinks it is. So far so good. But now the problems start pouring in. Users may be at different levels of knowledge and hence have different mental models. And what about software that's just too complex to be understood?

I have some examples of how people react to BBMs.
When I sit in my car and turn the steering wheel, it fits my mental model perfectly that the wheels start turning, and if I'm driving the car will begin to turn. The steering wheel is at least one part of my car that's not a BBM (several other parts are).
In my car I also have a navigator. It would have been a scary BBM to me before I learned about shortest-path algorithms and GPS. Now I've luckily learned to accept it, but it did take some adaptations of my mental model. My wife on the other hand doesn't care how it works. She has accepted that it's a black box and just has full faith in its working. I have tried to ask her how she thinks it can decide on good instructions for her to find home - her answer was simple: "it has a GPS so it knows where I live". I suppose her answer is correct in a way.

My beloved grandmother had a TV, and although she spent most of her time watching it she also claimed to hate it. She was afraid of it - because she didn't understand it. To her it was one big BBM, and she certainly didn't appreciate the fact that she didn't know how all the little people had gotten inside it. She always needed help to tune it to the right channels, and if it was moved and a cable fell out, she'd call somebody to fix it, terrified of touching the thing herself.

I feel the same way about BBMs I encounter in my daily life. Like the SqlDataAdapter in the .NET framework. I know Microsoft wants me to use it to connect my DataSets to my SQL Server, but I don't trust it. It's totally a BBM to me, and I'd always prefer to use SqlCommands instead, because they fit my mental model better. They do what I tell them to, when I tell them to do it - and I can fully understand their purpose.
I guess that's something really tricky when you develop APIs and frameworks. It's quite difficult to know the domain knowledge level of all your users, and hence it can be tricky to match their mental models without creating BBMs for some of them.

As part of MondoSearch we had a similar problem. When we first started making .NET APIs for the search engine, we faced the problem that all the users implementing it were webmasters with little or no .NET/programming knowledge.
In spite of code examples and lots of guidelines and manuals, our support was flooded with problems caused by bad/wrong code.
Something had to be done so we decided to make a SearchControl that could be put on aspx pages that handled all the typical logic related to having a search and result-page, code that was typically error-prone. Stuff like recreating a search upon postback, navigating in search results, narrowing the search, connecting to underlying search-API and so on.
When we released the SearchControl, the non-developers like webmasters and supporters liked it instantly because it empowered them to do a lot of things they would have given up on before. But then our audience changed. The world had accepted .NET - and that making a website was a joint developer/graphical designer/webmaster/??? task - and all of a sudden we had developers getting annoyed with our SearchControl. Why? Because to them it was a BBM that they didn't dare to use...
Soon after, we released a web service that provided pretty clean, code-wise access to all the search functionality. Today we maintain both interfaces and are in fact trying to adjust the SearchControl to be more "developer-friendly" by making its actions and functionality more controllable and transparent.
But all in all I guess it helped me to learn a little lesson about Black boxes and their effects on people.

Monday, February 19, 2007

C# and keeping your lists sorted

Every once in a while I feel the need for having a list in C# that's always sorted. And every time, I spend time examining the various lists available in C# and end up being quite annoyed.
Perhaps there is such a list in the .NET framework, but I haven't been able to find it yet.
Sure, there is the SortedList, but that's a Key/Value-style hashtable where it sorts on the keys. Sometimes that's fine, but most often I find it pretty useless - especially since you can't have duplicate keys. You can also use the .Sort() method on an ordinary list, of course. But what if you need a list that's always sorted, i.e. when you Add a new item to the list it's inserted in sorted order?! Calling .Sort() after each .Add() isn't really an option since it's way too slow.

So, as you might have expected, when I had the problem again this weekend I ended up writing my own implementation of a SortList.
It might not be pretty, but it seems to work and compared to the .Add();.Sort() alternative it's pretty darn fast :-)

public class SortList<T> : List<T>
    where T : IComparable<T>
{
    public new void Add(T item)
    {
        //No list items - just add it
        if (Count == 0) { base.Add(item); return; }
        //Bigger than Max - append at the end
        if (item.CompareTo(this[Count - 1]) >= 0) { base.Add(item); return; }

        int min = 0;
        int max = Count - 1;
        while ((max - min) > 1)
        {
            //Find half point
            int half = min + ((max - min) / 2);
            //Compare if it's bigger or smaller than the current item.
            int comp = item.CompareTo(this[half]);
            if (comp == 0)
            {
                //Item is equal to half point - insert it right there
                Insert(half, item);
                return;
            }
            else if (comp < 0) max = half; //Item is smaller
            else min = half;               //Item is bigger
        }
        if (item.CompareTo(this[min]) <= 0) Insert(min, item);
        else Insert(min + 1, item);
    }
}

The code is based on a standard generic list, and it simply replaces the Add() method with one that inserts items in sorted order. It does this by narrowing in on the insertion point, always dividing the remaining range in two - much like one would solve the classic game of "Guess the number I'm thinking of".
To test it, I wrote a small test application that generated an array of 10,000 random integers (values between 0 and 10,000). Then it timed how long it would take to add them to the SortList, and afterwards how long it would take to Add+Sort them in a usual list (with Sort being called for each Add, since we need a constantly sorted list).
On my workstation, SortList took about 40 ms to complete the task while List.Add+List.Sort took 2654 ms.
And in case you're wondering, the answer is yes: the sorted contents were the same in both lists afterwards.
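As an aside, the framework's own List<T>.BinarySearch can be used to get the same always-sorted behavior without hand-rolling the binary search: when the item isn't found, it returns the bitwise complement of the correct insertion point. A minimal sketch (the helper name SortedInsert is my own):

```csharp
using System;
using System.Collections.Generic;

public static class SortedInsertDemo
{
    // Insert an item into an already-sorted list, keeping it sorted.
    // List<T>.BinarySearch returns the item's index if found, or the bitwise
    // complement of the index of the next larger element if not found -
    // which is exactly the insertion point we need.
    public static void SortedInsert<T>(List<T> list, T item) where T : IComparable<T>
    {
        int index = list.BinarySearch(item);
        if (index < 0) index = ~index;
        list.Insert(index, item);
    }

    public static void Main()
    {
        var list = new List<int>();
        foreach (int n in new[] { 42, 7, 19, 7, 3 })
            SortedInsert(list, n);
        Console.WriteLine(string.Join(",", list)); // 3,7,7,19,42
    }
}
```

I haven't benchmarked this variant against the SortList, but since both do a binary search followed by an Insert, I'd expect them to perform in the same ballpark.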

Friday, February 16, 2007

Cool Service: FeedJumbler

I love RSS feeds (well, and I don't exactly dislike Atom either). But one of the few annoying things about them is that as soon as you have more than 5-10 feeds that you follow daily, you'll run into the problem that the feed viewers fill up all the available space on your Google start page (or whatever portal you might be using).
Use a real RSS reader, you might say... But I tried that - and it didn't work for me. I have enough applications open on an average day - no need for one more. And using Outlook as an RSS reader sucks, because my Outlook is always open on the Inbox - not on the feeds.

So, just as I was about to create my own tool for merging several feeds into one I found FeedJumbler. This is a cool site that allows you to quickly setup a merged feed in whatever protocol you prefer.
To try it out I've made a merged feed of some of my regular favorites and put it in the right-hand pane.
If you want to see a list of the feeds (and to have the ability to add this merged feed to your start-page, see it here).

It just goes to prove that sometimes you don't need to invent everything yourself.

MondoSearch for EPiServer (Part 1)

Last year, while I was creating the MondoSearch for Sitecore integration, I was also technical contact/project manager for the MondoSearch for EPiServer integration. Besides keeping me busy for half a year, this provided an excellent opportunity to learn a lot about these two state-of-the-art content management systems, each with their own strengths and difficulties.

With EPiServer I was lucky enough to be working with the former (now again current) EPiServer Product Chief, Roger Wirz, through his company Briomera. In the end I was very pleased with the results of our joint work - it turned out to be quite a cool integration of the products, more deeply integrated than any other EPiServer search tool I've seen. In November and December I got to travel around Sweden and demonstrate it to EPiServer partners in both Gothenburg and Stockholm. It generated a lot of interest, and several customers are already making their own implementations based on the integration.

I've been wanting to share some screenshots of the integration with you all, so here goes.

Just as with the Sitecore integration, the integration for EPiServer is based on the MondoSearch Integration Services, a set of XML web services based on MQL and DataSets.
In the Configuration section it's possible to set up the connection strings and URLs for all of the web services, as well as to use the Diagnostic tool to check that all services are up and running. This is a handy one-stop place for troubleshooting.

If we stay in the Admin section of EPiServer we might draw our attention to the Crawler Control.
This is where you can control the indexer, see crawler logs, manually start a new crawl, and also set up an EPiServer Heartbeat that checks at regular intervals whether it's time to start the crawler - and whether the last crawl went okay.

When it comes to the actual search implementation, we've adjusted the standard MondoSearch Template 1 to work within EPiServer, and also created a PageType for it.
By adding a User Control with meta-tags to all the pages, we're also able to enhance the meta information on the pages as well as categorize them, using either EPiServer's categories or the built-in MondoSearch categories.
All text strings used on the search page can be found in EPiServer-style language XML files and can quite easily be translated.
In the integration we've also included support for 2 authorization methodologies in order to fully support EPiServer's authorization. This means that when you search on your EPiServer site, you'll only get back the results you are allowed to see.

Since the Editor search that comes with EPiServer can sometimes leave you wanting a bit more, we also included an Editor Search based on the MondoSearch index of the website. This is an easy way for editors to find the documents they want to edit.

This was a brief introduction to the configuration and searching facilities in MondoSearch for EPiServer. When I have time I'll post some more screenshots of the neat interaction with BehaviorTracking and InformationManager from within EPiServer.

Thursday, February 15, 2007

Improving MOSS Search

One of my colleagues, Lars Fastrup, has started a really nice blog around all the work our Ontolica team is doing to improve the usability and functionality of MOSS 2007.
Recently Lars posted some really nice screenshots of the upcoming version of Ontolica, which will probably spark the interest of most experienced MOSS users!
His announcement of a lightweight version of Ontolica, introducing wildcards as a long-lost search feature in MOSS, has certainly already made quite a buzz through many a weblog.

Well done, Lars and welcome to the blogging sphere :-)

Hall of Fame:

Every now and then I come across a search implementation I really, really like.
Some places, people think outside of the box in order to help the visitors (and/or customers) on their site. In these days, where the search market is being heavily commoditized and more and more websites don't care about the quality of their search functionality as long as they have it, it really fills my heart with pride (I know, I'm turning this into a sob-story) when I encounter MondoSearch customers who have gone that extra mile to make something that's cool to use.

One of the MondoSearch implementations that I most often showcase to people wanting to see the real power of good site-search is the solution at Coleman, a US-based camping gear business - I think they've made an awesome implementation.

Their solution isn't based on the latest technologies - in fact they still rely on good ol' ASP to do the job - but they've still managed to put in a couple of really nice features.

Try going to their site and searching for "tents" or "coolers" or any other product that you'd be interested in.
Now the first thing you'll see is probably a SearchHeader (a query-related banner ad). This will take you directly to a relevant offer they might be running at the moment - or just shorten your way to the products of your interest. I don't know the internal workflows at Coleman, but I can imagine these SearchHeaders being the result of them analyzing frequent search words on the site and then adding SearchHeaders in response, in order to help people searching for the most popular terms.
Underneath the ad come the results, in categories. This is an excellent example of why it can sometimes be a good idea to show results in categories.
In the case where you searched for "tent" it's unclear if you are interested in:
a) buying a tent
b) getting parts for a tent
c) general information about tents
d) tips on how to use your tent
e) ...

Luckily Coleman Search presents you with the best results within each category right on the first result-page.
Most people are probably interested in buying a tent, so naturally that category goes on top.
And this is what it all comes down to: search is all about not wasting people's time. Don't make people waste time on your website looking for the products they want to buy - bring the products to them when they ask. And when you present them with a search result, make it easy to pick the right one.
In this case, Coleman helps the users by actually showing a small picture of each tent in their "Products" category, along with the price. And if a user feels like buying a tent right there and then, well - it's no problem - just click the link directly on the result page and add a given tent to your cart!
If you scroll down the results you'll also see a category of manuals for the various products sold by Coleman. Here it's quite helpful that they provide a PDF icon next to the PDF documents, so the user knows what to expect when selecting that link... Many times I've been lost on a company's website, clicked on a result link and then had to wait minutes while Firefox desperately tried to load a huge PDF, when I was just expecting a standard document.
In general I find it's always a polite gesture to tell people what they'll get if they click on a link - and especially to warn them if they'll end up with something like a PDF (not that I have any problems with PDF files :-).

At the bottom of the result page we find the "Advanced Search" field for searching again, and this is actually the first place where I have a little bit of criticism... This area seems a little messy to my eyes. There are no clear gestalts separating the category selection and the search-type selection, and in my opinion both selections are unnecessary. Since the results are divided into categories, and it's possible to drill down from the results, I think the advanced category selection is redundant (and I bet that only very few people actually use it). The same goes for the Search Type. Here it defaults to AND-searches, which can be pretty dangerous. Suppose a visitor searches for "Camping Tent". He'll get significantly fewer results than a visitor searching for "Tent" - because not all of the tent product pages contain the word "camping", although the tents could probably be used for camping :-)
I tend to prefer OR-searches, provided that a document matching all the search words still ranks better than documents matching only some of them.
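To illustrate what I mean (with made-up data - this is not how MondoSearch actually ranks): an OR-search can match on any term but still rank documents by how many terms they hit, so multi-word queries reward the best matches without hiding the rest. A rough sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class OrRankDemo
{
    // OR-search: a document matches if it contains ANY query term, but
    // documents matching more terms rank higher. A search for "camping tent"
    // therefore favours true matches without hiding tent pages that never
    // happen to mention "camping".
    public static IEnumerable<string> Search(string[] docs, string[] terms)
    {
        return docs
            .Select(d => new { Doc = d, Hits = terms.Count(t => d.Contains(t)) })
            .Where(x => x.Hits > 0)                // OR: at least one term matches
            .OrderByDescending(x => x.Hits)        // more matched terms rank higher
            .Select(x => x.Doc);
    }

    public static void Main()
    {
        string[] docs = { "family camping tent", "pop-up tent", "camping stove" };
        string[] terms = { "camping", "tent" };
        Console.WriteLine(string.Join(" | ", Search(docs, terms)));
        // "family camping tent" ranks first, since it matches both terms
    }
}
```

With an AND-search, "pop-up tent" would have vanished from the result list entirely, which is exactly the problem described above.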

All in all I think it's a nice search implementation with the only recommendation that more simplicity in the Advanced Search section would be nice. Potentially they could also expand the search to include some search-filters, like "search only for products cheaper than X" - I'm sure some users would find that handy.

Wednesday, February 14, 2007

The worst thing after computer viruses

... are real-life viruses, one of which I seem to have caught. The reason for the shortage of posts these last days is that I've been down with the flu since Friday.
"Great, lots of time to code on the Poker challenge" you might be thinking - but alas, no. It's been one of those annoying flus that makes your head feel like 20 elephants are dancing on top of it to the rhythm of your sneezes, and where it's absolutely impossible to do anything but stare empty-headed into the darkness, hoping it'll all be over soon...

Luckily I seem to be doing a bit better now and soon I'll flood my blog with crazy code as always.


Wednesday, February 7, 2007

Structs vs. Classes

Structs are supposedly a lot quicker than classes. However, some time ago I heard rumours that in .NET 1.0 an error in the framework would cause them to be slower in some cases. Today, for some reason, I felt like making a small comparison in .NET 2.0.

So, I made a small console program with the following code:

public struct TestStruct
{
    public int A;
    public bool B;
    //public string S;
}

public class TestClass
{
    public int A;
    public bool B;
    //public string S;
    public TestClass() { }
}

class Program
{
    static void Main(string[] args)
    {
        int MaxIterations = 1000000;

        DateTime t1 = DateTime.Now;
        TestStruct[] ts = new TestStruct[MaxIterations];
        for (int i = 0; i < MaxIterations; i++)
        {
            ts[i] = new TestStruct();
            ts[i].A = 42;
            ts[i].B = true;
            //ts[i].S = "Hey Joe";
        }
        DateTime t2 = DateTime.Now;
        TestClass[] tc = new TestClass[MaxIterations];
        for (int i = 0; i < MaxIterations; i++)
        {
            tc[i] = new TestClass();
            tc[i].A = 42;
            tc[i].B = true;
            //tc[i].S = "Hey Joe";
        }
        DateTime t3 = DateTime.Now;
        TimeSpan ts1 = t2 - t1;
        TimeSpan ts2 = t3 - t2;
        Console.WriteLine("Struct Time: {0} ms", ts1.TotalMilliseconds);
        Console.WriteLine("Class Time: {0} ms", ts2.TotalMilliseconds);
    }
}


I ran it both with the struct/class containing a string, and without.
Here are the results I got from running it (in debug mode):

Structs/Classes without string: 20 ms / 270 ms (= classes are 13.5 times slower than structs)
Structs/Classes with string: 60 ms / 430 ms (= classes are about 7 times slower than structs)

The reason I'm trying with and without strings is of course that strings are reference types that live on the heap, not inline like the other value-type fields, hence they are quite a bit slower to work with.

It's naturally no surprise that structs are faster than classes - they should be - but it's nice to get an idea of exactly how big a difference there is.
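One caveat with micro-benchmarks like this, by the way: DateTime.Now typically only updates every 10-15 ms on Windows, so for timings this small the Stopwatch class (new in .NET 2.0) is a safer choice. A sketch of the same timing pattern:

```csharp
using System;
using System.Diagnostics;

public static class TimingDemo
{
    public static void Main()
    {
        const int MaxIterations = 1000000;

        // Stopwatch uses the high-resolution performance counter when
        // available, instead of the coarse system clock behind DateTime.Now.
        Stopwatch sw = Stopwatch.StartNew();
        long sum = 0;
        for (int i = 0; i < MaxIterations; i++)
            sum += i;                       // the work being timed
        sw.Stop();

        Console.WriteLine("Elapsed: {0} ms (checksum: {1})",
                          sw.ElapsedMilliseconds, sum);
    }
}
```

The checksum is printed only to keep the compiler from optimizing the loop away.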

Tuesday, February 6, 2007

New Functionality: Related Articles

Here's one of those small things I've been working on over the weekend.
If you're using IE, you should now be able to see a new widget-like thing at the bottom of the right-hand widgets. It's "Related Codeproject Articles".

The concept I use to get these related articles is somewhat similar to how the "Related Pages" functionality works in the MondoSearch/Sitecore integration - just based on another platform.

I've written a small piece of JavaScript that extracts the keywords on this page, and then calls a server-side function to perform an MSN Search for those keywords on one of my all-time favourite websites, CodeProject.

It's still quite experimental so I don't expect it to work in all scenarios - but possibly a few lucky readers will get to enjoy this functionality now :-)

If anyone is interested in more details as to how it's done, drop me a comment - I might be persuaded to share the code...

Monday, February 5, 2007

MondoSearch for Sitecore (Part 4)

Like all good trilogies, this one comes in more than 3 parts :-)

The last little detail of the integration I want to show is the overall architecture. The entire integration between the products is based on 4 key XML web services, provided by the MondoSearch products and consumed by the integration within Sitecore.
These are:
  • MondoSearch Search WebService
    Probably the most important web service. This is the service that handles all the searches. It takes a query in MQL (Mondosoft Query Language), performs a search, and returns the results as a DataSet. It's used by the search/result pages, the editor search, as well as the Similar Pages code example. It's possible to have the integration working with only this service enabled - although naturally all features other than search will then be disabled.
  • MondoSearch Admin/Crawler WebService
    This service can control the MondoSearch crawler as well as do some essential setup and configuration - like adding starting points, reporting on crawler status, etc. It takes MQL and a connection string (holding user name, password, and license key) as input. It's used in the Crawler Control application and the Start Crawler task.
  • BehaviorTracking WebService
    This is the service that extracts all the important information about the users search behavior from BehaviorTracking. Once again it's based on MQL and Datasets which makes it easy and standardized to use. It's used all over - in the BT Portal, Term Details, Related Topics, Item Details, autocomplete searchbox, etc.
  • InformationManager WebService
    InformationManager is typically used by the webmaster or marketing dept. to optimize the search based on user behavior. This could be by adding SearchHeaders (custom pieces of HTML at the top of the search results, based on the query), SearchNames (a direct link to a specific page for a given search query), synonyms (goes without saying) and so on. The web service provides easy MQL-based access to all these features. However, the only feature that's included in v1.1 of the integration is SearchHeaders - so here's room for improvement :-)

Since all of this is based on web services, it's easy to imagine how you can split up your servers. It's quite easy to have a hosted search solution, as well as to host the search yourself. You could even host each product on a different server, and have a fallback hosting scenario set up if company policies require it.

Another benefit that I find really cool is that the integration leaves room for adding your own components based on the search/BehaviorTracking functionality, since the classes used to call the web services are public. Just imagine the awesome features you could implement on your site. For instance, how about adding a "Personal Suggested Links" box on the front page, based on the visiting user's history of searches/browsing on your site?!
Or how about implementing your very own "local-by-global" search, which catches the queries from global search engines that led users to your site and performs a local search on them, suggesting other relevant pages?!
And the code is pretty simple. In order to perform a search simply write something like this:

using Mondosoft.SitecoreIntegration.Search;

private void DoSearch()
{
    ServiceWrapper service;
    DataSet results;
    service = new ServiceWrapper(Configuration);
    results = service.ExecuteSearchMql("OPTIONS Query='Sitecore' " +
                                       "LIMITS FirstResult=0 MaxResults=5");
}

I hope a lot of partners and customers will pick up this challenge and make some really cool implementations of this. Now it's up to you guys :-)

Sunday, February 4, 2007

No more commercials

The frequent visitor to this blog might notice how the commercials have disappeared all of a sudden. I figured I might as well remove them, given that they only generate around $10-15 in revenue a month - and really take up a lot of space. Besides, although I'm secretly a fan of Google, I see no reason to increase their revenue even further.

However, if the number of visitors picks up dramatically I might become so greedy that I'll turn them back on :-)


These last couple of days I've been busy (as always) coding on several different projects. Probably some of you know the feeling that arises when all of a sudden tons of good ideas emerge in your head at the same time, and you can't wait to try them all out to see if they work just as well in the real world as they do in your mind.
I've also kept on coding the Poker library, and I think I've gotten most of the logic right by now. Pretty soon I'll put up a short online test of the library, and I invite everybody to try it out and see if the logic works well enough.
Some of the other ideas I'm working on at the moment:
  • Blog real-time visitor tracking, enabling me to get an RSS feed of the current visitors on my blog - and possibly even send personalized messages to individual visitors in real time.
  • "Similar Articles" generic JavaScript code that will extract keywords from the page it's on, and perform a search on a global search engine for other related articles. I actually got quite far with this idea yesterday, but ran into some cross-site XMLHttpRequest security issue...
  • My own implementation (or yet another of my own implementations) of a Suffix Tree Clustering algorithm. I'm trying to make it so generic that I can make it available here for download.
  • Mapped web search. I'm looking into making Microsoft and Google meet, by using the API for Microsoft Live Search along with the API for Google Maps and a Geo-IP API, in order to show where the results of a web search come from.
  • (and tons of other projects)
As soon as any of the above is ready for show and tell I'll post them here.