I’ve been thinking about how libraries can become better at serving researchers and expert users, especially through their front pages. A recent news item from the University of Tennessee library prompted me to make the connection between expert users and library home pages. The University of Tennessee library just announced the launch of their One Search box on the home page. I found this quote especially disturbing: “launching a major upgrade to the Libraries’ discovery portal: the search box in the middle of library homepages will yield exponentially more results than in the past.”
Is there any serious researcher who wants to get “exponentially more results” to a search query? I don’t want that. What I want is better results, something with quality and relevance to the particular topic I’m looking for. In some novel areas of research it may be possible to construct a search query that returns all and only relevant results. For example, when I was doing my dissertation research on citizen science I was able to get almost all good results and very few irrelevant results during my searches of Lexis-Nexis and Web of Science. I did this by using whole-phrase searches (quotation marks around phrases) and occasionally a Boolean limiter to look only in titles, authors, abstracts, or other specific metadata fields. I know that many of my colleagues were working on topics which were not nearly as simple to search for, so limiting searches with Boolean operators or phrases became cumbersome. A topic like “trust” will return thousands of articles in many different disciplines, most of which are irrelevant. In those cases, being able to limit a search to a particular topic database may prove more fruitful.
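To make that concrete, here is roughly what those queries looked like. The field tags are in the style of Web of Science; the exact syntax varies from database to database (and the author name is just a placeholder), so treat this as an illustrative sketch rather than a recipe:

```
TS=("citizen science")                       # exact phrase, searched in the topic fields
TI=("citizen science")                       # limit the phrase to article titles only
AU=(Smith J*) AND TS=("citizen science")     # combine an author limiter with the phrase
```

The payoff of this kind of field-limited, whole-phrase searching is precision: a few dozen mostly relevant results rather than thousands of mostly irrelevant ones.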
I avoided a lot of problems because I’m an advanced searcher. I went directly to databases which I knew would be relevant to my field of study - communications and information science - which helped to narrow my results to particular subsets of scholarly communications such as journal articles or newspaper sources. In fact, a big part of the research process was finding the correct database to use for my queries. I might have started my research by looking in general search engines such as Google, or by doing a simple search through the library catalog, but I never relied on those results to be the definitive guide for my research. I always wanted to, and needed to, go further and deeper.
I think libraries do a disservice to their expert patrons by putting everything into a single search box. For me, a one-search box with exponentially more results is an automatic failure. I wrote a week ago about a speech by Andrew Abbott, a sociologist, who described his circular search process and traced when researchers began to become disconnected from the library. According to Abbott, humanities and social science researchers disconnected from the library during the 1920s, in the wave of library centralization which closed many departmental libraries in favor of large centralized libraries. From an administrative point of view centralization made a lot of sense because it could potentially save money, but from a researcher’s point of view it was a disaster. The connection between teaching and research declined, and the ability to easily find a reference by walking down the hall vanished.
The internet and world wide web have altered this situation. On the one hand, the web has made it easy to have reference materials close at hand: they can be found with a basic web search. On the other hand, it has meant that the number of outlets for information has increased dramatically. Doing a systematic literature review is now almost impossible for a single researcher. Berrypicking, à la Marcia Bates, seems a much more likely model for how most researchers approach the problem of information finding. There are some fields, such as medicine, where systematic reviews are done, usually with the help of specialized information personnel. But the time needed for those reviews is significant, and most researchers don’t have the resources to support those efforts.
What researchers do need is a way to limit what portions of the firehose of information they are looking at. One of the best ways to do this is through dedicated subject databases. But most library pages bury the links to these databases two or three clicks deep. I wonder how much more effective libraries might be in attracting expert researchers to their home pages by moving away from a one-box-to-rule-them-all approach toward a customized home page tailored to the needs of each individual researcher. A chemist might see links to ChemSpider while a rhetorician sees links to MLA or some other database. Libraries could do this. The technology is available. It’s not unlike Facebook and their newsfeed filtering.
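The mechanics of such a tailored home page are not exotic. Here is a minimal sketch in Python, assuming a researcher profile that records a discipline; the mapping, database names, and function are all illustrative, not any real library system’s API:

```python
# Hypothetical sketch: feature different subject databases on the
# library home page depending on the researcher's declared discipline.
# The discipline-to-database mapping below is purely illustrative.

SUBJECT_DATABASES = {
    "chemistry": ["ChemSpider", "SciFinder"],
    "rhetoric": ["MLA International Bibliography", "JSTOR"],
    "information science": ["Web of Science", "LISA"],
}

# General-purpose tools shown when no disciplinary profile matches.
DEFAULT_LINKS = ["Library Catalog", "Google Scholar"]

def homepage_links(discipline: str) -> list[str]:
    """Return the database links to feature for a given discipline,
    falling back to general-purpose tools for unknown disciplines."""
    return SUBJECT_DATABASES.get(discipline.lower(), DEFAULT_LINKS)

print(homepage_links("Chemistry"))   # the chemist's view
print(homepage_links("history"))     # no profile yet, so the generic view
```

The point of the sketch is that the hard part is not technical; it is deciding to treat the home page as a personalized surface rather than a single funnel.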
The fixation upon single search boxes is driven by a number of factors. There is the continuing hope for centralization and control. Funneling everyone through a single interface may make life easier for the reference librarians. There is the wild success of Google which transformed the search world and has now become the default expectation of many users. And there is also the sometimes quixotic need for usability and simplicity.
I think centralization and control are becoming increasingly difficult for the library to manage. The distribution of data away from the library continues apace. The post I wrote last week discussed some comments by John Unsworth which suggest a trend toward data communities centered on particular types of research data such as the HathiTrust.
Google is a valuable lesson, but it’s not clear to me that it applies to libraries. Google benefited from a relatively homogeneous set of information types - namely HTML pages. Each page had limited metadata and could be judged through a relatively simple algorithm like PageRank. The modern research library, by contrast, deals with an incredibly heterogeneous set of data - books, articles, proceedings, microfilm, archives, etc. All of these need to be cataloged, and it’s not at all clear that they can be lumped together usefully.
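To see why PageRank depends on that homogeneity, here is a minimal sketch of the algorithm by power iteration. The toy graph and damping factor are illustrative assumptions; the key point is that the whole method rests on one uniform signal - who links to whom - which simply doesn’t exist across books, microfilm, and archives:

```python
# Minimal PageRank sketch by power iteration. Every "document" is
# assumed to be the same kind of thing: a page with outgoing links.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # a page passes its rank evenly to the pages it links to
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A tiny illustrative web: A links to B and C, B to C, C back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = pagerank(graph)
print(scores)  # C scores highest, since both A and B link to it
```

Nothing in this loop knows anything about a document except its links. That uniformity is exactly what a research library’s heterogeneous collections lack.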
Finally, many of these single search boxes are justified through usability studies. I’m sure users will say they want something like Google, but I’m not convinced we should give it to them just because they ask. Sometimes a specialized tool is complicated because the task is challenging. To expect research to be easy seems foolish to me.