Searching for something online can be quite frustrating if you don’t know how to use your search engine(s). On one such frustrating search engine-eering exercise, I bumped into a rather amusing research paper. Well, it may have some applications in search engine design (who knows, nowadays), but it seemed somewhat silly to me. What do you make of it?
The paper, Modeling Searcher Frustration, is by Henry Feild (is that spelled right?) and James Allan (two good lads from UMass Amherst). The basic idea is to model how frustrated a person searching for something is at any given time.
I guess it’s partly the interesting nature of the topic and partly the silly way in which the research was done that made me write a blog post about it. It also shows how bloody easy it is to write a paper that gets presented at an international conference. This one was presented at the Human-Computer Information Retrieval Conference 2009 (HCIR 2009).
I’ll have some fun here by picking out some obvious points and some silly methods and lines from the ‘research paper’. I’m really sorry, Henry & James, I mean no harm – I just have a lot of time on my hands and a chance to poke some fun. Hell, I don’t even know if this will turn out funny!
Even the way they’ve defined frustration is quite naive:
“…We consider a user frustrated when their search process is impeded, regardless of the reason…”
Let me use an oft-misused American expression here: No Duh!
Right. So let’s look at the method.
“…To measure frustration, we ask users to rate their level of frustration with the current task up to the current point on a scale of 1 (not frustrated at all) to 5 (extremely frustrated). A user is considered frustrated if they indicate a level of 3 or more. While satisfaction and frustration are closely related, they are distinct. As a consequence, a searcher can ultimately satisfy their information need (i.e., be satisfied), but still have been quite frustrated in the process...”
Well done, Sherlock! What better way to find out how frustrated a person is than to ask them? And the experiment (which you can read more about in the unabridged version of the paper) was conducted on a truly massive scale, making sure it covered all the kinds of people who might search for different queries: 15 undergrad and grad students. My mates from the dorm room?
Anyway, all’s well that ends well. I’m sure it was a great learning experience for the chaps who wrote this paper. Also, I might one day be writing papers horrifyingly similar to this one. Maybe it’ll be called “Structure of Search Queries of Illiterate People”.
I’ll make sure to include a lucid graph at the end, with a few hundred points plotted, to validate my results.