Papers, presentations and audio are available at http://www.nla.gov.au/lis/stndrds/grps/acoc/papers2010.html
Keynote Address – David Lindahl – eXtensible Catalog
http://www.extensiblecatalog.org/
This very interesting talk focused on an open source product called XC, the eXtensible Catalog.
- The development team at his institution is made up of an anthropologist who studies librarians and library users, a librarian and an IT expert.
- The software is designed to empower libraries to study their clients, as well as the way those clients use library systems.
- XC is an open source catalogue or OPAC designed to link in with all major library systems. Its interface has been developed on the open source platform ‘Drupal’ and is easily customisable without having to know any coding. It currently looks similar to the EBSCO interface, but as it is so customisable it could look like basically anything we wanted. A sample look for the OPAC is available in the PowerPoint presentation.
- Kyushu University has an instance called Cute.Catalog: http://search.lib.kyushu-u.ac.jp/catalog/en . A results set showing how a student can limit their results is available at http://search.lib.kyushu-u.ac.jp/catalog/en/xc_browse/search/7/%20%28murder%29%20_rows%3A%2820%29?search_type=browse
- XC also provides a ‘Metadata Services Toolkit’ module, which presents a faceted browse/search of problems in your data. It was very interesting: the toolkit can pick up various coding problems and then show you other records with the same problems. You can then use the facets to see whether there is a particular ‘theme’ among the problem records, so you can perform mass changes. The facets also make problems such as spelling errors easy to spot visually.
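To give a feel for the kind of checking the toolkit automates, here is a rough sketch in Python using the pymarc library. It is illustrative only, not XC’s actual code (the toolkit itself is a separate server application), and the input file name and the two problem rules are made up:

    from collections import Counter
    from pymarc import MARCReader

    problem_counts = Counter()

    # 'records.mrc' is a hypothetical export of catalogue records.
    with open('records.mrc', 'rb') as fh:
        for record in MARCReader(fh):
            # Flag records with no title field at all.
            if not record.get_fields('245'):
                problem_counts['missing 245'] += 1
            # Flag subject headings where a form subdivision is still
            # coded $x rather than $v (two sample values only).
            for field in record.get_fields('650'):
                for value in field.get_subfields('x'):
                    if value.rstrip('.') in ('Fiction', 'Periodicals'):
                        problem_counts['form subdivision in $x not $v'] += 1

    # The counts behave like facets: the biggest 'themes' of bad data
    # are the best candidates for a mass change.
    for problem, count in problem_counts.most_common():
        print(f'{count:6d}  {problem}')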
Karen talked about ‘discovery layers’: a single source/page/search engine that searches across many of a library’s holdings. An example would be Serials Solutions, which searches across a number of our database platforms (although it isn’t as seamless as some others).
- SLQ implemented One Search, built on Primo (I think). She talked about their experiences with it.
- Mentioned that data quality is important when implementing a platform like this; all kinds of coding errors stood out (very obviously).
- Mentioned problems caused by non-standard local solutions, such as coding a set of databases in a particular non-standard way, which caused them all to display as ‘books’ in the system. Also mentioned the cost of not updating old records when major rules changed (e.g. when the form subdivision in subject headings moved from $x to $v; a sketch of that kind of batch fix appears after this list).
- There are ways of finding such things in the local system and doing mass changes, but they are very time consuming.
- She also mentioned that the TAFE libraries in Queensland would never be able to do this because of what they include in their catalogue data, whereas SLQ has a history of high-quality data, which she saw as essential for the successful use of a product like this.
- The whole talk questioned whether we should catalogue things differently to suit a discovery layer; her view was that if we catalogue correctly and to standards in the first place, things will work out.
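As a concrete illustration of the $x to $v problem above, here is a minimal batch-fix sketch using the pymarc library. The file names are hypothetical, the list of form subdivisions is a tiny sample rather than the full authorised list, and it assumes at most one $x per heading, so treat it as an outline of the approach, not a production job:

    from pymarc import MARCReader, MARCWriter

    # Tiny sample only; a real job would use the full authorised list.
    FORM_SUBDIVISIONS = {'Fiction', 'Periodicals', 'Juvenile literature'}

    with open('catalogue.mrc', 'rb') as infile, \
         open('fixed.mrc', 'wb') as outfile:
        writer = MARCWriter(outfile)
        for record in MARCReader(infile):
            for field in record.get_fields('600', '610', '650', '651'):
                subs = field.get_subfields('x')
                value = subs[0] if subs else None
                if value and value.rstrip('.') in FORM_SUBDIVISIONS:
                    field.delete_subfield('x')      # drop the old coding...
                    field.add_subfield('v', value)  # ...and re-add it as $v
            writer.write(record)
        writer.close()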
A rather ‘interesting’ talk on having your users contribute catalogue records. I could feel the tension in the room as people listened to her. I thought it had merit though.
- Library catalogues are not where people go for information, because the data is out of date and the terminology is poor. She also noted that no thesaurus is neutral and that our library catalogues are filled with loaded terminology, so why not embrace this and use common terminology instead of our unwieldy thesauri?
- SLWA has created a site where users can catalogue their ‘artefacts’ directly into the system. They fill out a form, select some dropdowns and, ta-da, a MARC record is created in the background. The records make extensive use of notes fields, combined with URLs that link pictures to the catalogue records.
- She showed an example of someone who uploaded six personal photos as a series and annotated what the photos were about. The annotations gave a ‘voice’ and a depth of information that would have been lost if a librarian had catalogued them.
- An interesting example of people-powered cataloguing of Indigenous artefacts is at www.irititja.com
Another ‘interesting’ talk, this one on the uselessness of LCSH and the other major thesauri out there. There was less tension, as he was incredibly amusing, but it was still quite controversial.
- Search engines are wonderful because they allow users to type random words in a box and receive help.
- Google pre-empts a user’s search (as if by magic), and that is what library users are beginning to expect. Google also answers simple questions quickly, but doesn’t do so well with more complex questions, and library catalogues are worse at it (if you are looking at the slideshow, see the slides on ‘railways society mid west nineteenth century’).
- Libraries need to increase what’s in their catalogues.
- Presented findings from Cory Doctorow’s 2001 essay ‘Metacrap’, which argued that metadata is doomed because:
- People lie
- People are lazy
- People are stupid
- Mission: Impossible (people are poor observers of their own behaviour)
- Schemas aren’t neutral
- Metrics influence results
- There’s more than one way to describe something
- Showed examples of one book having 4 subjects in a library catalogue, 7 in Wikipedia, 40 tags in LibraryThing and 29 tags in Amazon (plus reviews, plus references to citations), while Google Scholar had references to even more citations
- What do we do? Stay in our small insular world with LCSH and RDA, or expand and embrace new things (tagging, open source, having clients write descriptions)?
- There is no plan for a national licence for RDA
- RDA records may begin appearing in OCLC soon; we need to watch out for them
- Training materials for RDA will all be on the ACOC website at http://www.nla.gov.au/lis/stndrds/grps/acoc/rda.html
- RDA implementation has been pushed back to the second quarter of 2012
- The NZ cataloguers’ wiki will also have some information: http://nznuc-cataloguing.pbwiki.com/RDA-page
Koha is an open source library management system; Kete is a companion product (also open source, I believe) that integrates with Koha.
- CALYX have integrated Google Translate into an OPAC. Standard Koha lets you customise the interface into different languages, but that doesn’t alter the catalogue data; the Google Translate widget lets the full record be machine-translated, including all of the bibliographic information.
- An example is Emmaus Bible College’s catalogue at http://library.emmaus.edu.au/ - they joined with a Korean school, and the widget allows some level of usage by the students, all of whom are Korean speakers.
- It is very easy to embed a widget like this in Koha, and this was demonstrated (a sketch of the kind of snippet involved appears after this list).
- Forvo was also mentioned: a pronunciation tool where native speakers record themselves saying a particular word, and the recording can then be embedded into catalogue records. Alliance Française’s catalogue has this feature in some parts.
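For the curious, the standard Google Website Translator gadget is only a few lines of markup. Below it is held in a Python string purely for illustration; pasting HTML like this into Koha’s opacheader or OPACUserJS system preference is one way to get a translate widget onto every OPAC page, though I don’t know exactly how CALYX did it:

    # The standard Google Website Translator gadget, held in a string.
    # This is a guess at the general approach, not CALYX's actual setup.
    GOOGLE_TRANSLATE_WIDGET = """\
    <div id="google_translate_element"></div>
    <script type="text/javascript">
    function googleTranslateElementInit() {
      // Machine-translate the whole page, bibliographic details included.
      new google.translate.TranslateElement(
          {pageLanguage: 'en'}, 'google_translate_element');
    }
    </script>
    <script src="//translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>
    """

    print(GOOGLE_TRANSLATE_WIDGET)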
Sharron proposed advertising library services to the parent institution, such as:
- Inventory control for the AV department
- Software tracking for IT
- Create brief bib records for software (a sketch of such a record appears after this section)
- Use fields to represent usage
- Loan items out to IT staff when they hold them, so they are easily traceable
- Registering copyright material (i.e. Docutek)
- Create bib records containing all needed data
- i.e. illustrations/length/pagination
- Ensure it is standard
- Easily pulled out for reports at audit time
- Photography department
- Inventory
- Asset control
- Holdings are suppressed from display
- Records photography equipment as well as an inventory of important photograph collections held
- Artwork department
- Register artworks (i.e. create bib records)
- Link to pictures
- Use locations to show which store each work is in, or where it is on display
- No publicly accessible OPAC, due to the copyright in the images
- Talked about supporting the parent institution in whatever it needs to do, as the library has the skills and the software required.
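As an illustration of the ‘brief bib record’ idea, here is a hedged pymarc sketch of a minimal record for a piece of software. The tags follow ordinary MARC practice, but every value is invented and this is my guess at the shape of such a record, not anything shown in the talk (subfields are in the older pymarc 4.x flat-list style):

    from pymarc import Record, Field

    record = Record()
    record.add_field(
        Field(tag='245', indicators=['0', '0'],
              subfields=['a', 'Adobe Photoshop CS5', 'h', '[software]']),
        Field(tag='300', indicators=[' ', ' '],
              subfields=['a', '1 DVD-ROM']),
        Field(tag='500', indicators=[' ', ' '],
              subfields=['a', 'Site licence: 10 concurrent seats.']),
        Field(tag='500', indicators=[' ', ' '],
              subfields=['a', 'Licence key held by the IT department.']),
    )
    print(record)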
Brad talked about analysing web transaction logs to work out what a user of the system has been doing. This is a project the NLA is working on; it is not finished yet.
- The NLA collects multiple logs (catalogue, web, content, user database), which makes it difficult to track things
- Once you screen out all the noise (image downloads, style sheet requests, search engine crawlers) you are left with quite specific data on what a user has done
- You can work out where they’ve come from, what they’ve clicked on, the types of searches they’ve done, whether they’ve been to other parts of the website, and sometimes even where they’ve gone afterwards
- When aggregated, this shows what users are commonly clicking on (or never clicking on).
- It involves a lot of work and a high level of coding experience (a simplified sketch of the screening step follows).
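The screening step is the easiest part to show. Below is a simplified Python sketch (nothing to do with the NLA’s actual pipeline) that reads an Apache-style ‘combined’ access log, drops static assets and obvious crawlers, and counts what is left; the log file name, suffix list and bot list are all assumptions:

    import re
    from collections import Counter

    # Matches Apache 'combined' log lines: host, identity, user, date,
    # request, status, bytes, referer, user agent.
    LOG_LINE = re.compile(
        r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
        r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
        r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    # Requests to screen out: static assets and well-known crawlers.
    NOISE_SUFFIXES = ('.css', '.js', '.gif', '.jpg', '.png', '.ico')
    BOT_AGENTS = ('googlebot', 'bingbot', 'slurp', 'crawler', 'spider')

    clicks = Counter()
    with open('access.log') as log:  # hypothetical log file
        for line in log:
            m = LOG_LINE.match(line)
            if not m:
                continue
            path = m.group('path')
            agent = m.group('agent').lower()
            # Screen out the noise: images, style sheets, crawlers.
            if path.lower().endswith(NOISE_SUFFIXES):
                continue
            if any(bot in agent for bot in BOT_AGENTS):
                continue
            clicks[path] += 1

    # Aggregated view: what users commonly click on.
    for path, count in clicks.most_common(20):
        print(f'{count:8d}  {path}')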
- A multilingual dictionary of cataloguing terms and concepts has been developed (I didn’t see by whom, sorry)
- 008/23 and 008/29 have new codes for electronic resources, to distinguish things like databases from websites
- The ‘Cookery’ subject heading has gone (YAY!), replaced by ‘Cooking’. See www.loc.gov/catdir/cpso/h1475.html (Kath sent an update about this but I must have missed it; it is very exciting for me, coming from a public library background).
- The Library of Congress Genre/Form Terms thesaurus (LCGFT) has been created. Headings will be coded 655 #7 $a Form heading $2 lcgft; these will be authorised headings for the format of items, instead of the free-text 655s that most people seem to use (a small coding sketch appears at the end of these notes).
- DDC 23 is being released in the middle of next year. It will be tangerine in colour.
- Major changes to law (340) and religion (200).
- Law has been changed to reflect the differences between European, British, Australian and US law, with updates to areas that use common terms that don’t mean the same things.
- Religion has some changes to Islam for now, but a supplement will be released after DDC 23 incorporating much bigger changes, such as equalising the number schedule to include all major religions and removing the huge Christian bias.
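Finally, the LCGFT coding mentioned above, as a small pymarc sketch (again in the pymarc 4.x flat-list subfield style). ‘Detective and mystery fiction’ is a real LCGFT heading; the record it would be attached to is hypothetical:

    from pymarc import Field

    # 655 _7: second indicator 7 means 'source specified in $2'.
    genre = Field(
        tag='655',
        indicators=[' ', '7'],
        subfields=['a', 'Detective and mystery fiction.', '2', 'lcgft'],
    )
    # record.add_field(genre) would attach this to an existing Record.
    print(genre)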