Michelle Springer, Beth Dulabahn, Phil Michel, Barbara Natanson, David Reser, David Woodward, and Helena Zinkham over at the Library of Congress have (publicly) released a very in-depth report on their experiences in the Commons on Flickr over a 10-month period.
Titled “For the Common Good: The Library of Congress Flickr Pilot Project”, it explores the impacts of the project on access and awareness, traffic back to the LoC’s own website, and, importantly, what they have learned about how collections might operate in the broader social web. Given that their pilot was born of a need to explore the opportunities and challenges of the social web, their findings are important reading for every institution dipping its toes in the water.
The Flickr project increases awareness of the Library and its collections; sparks creative interaction with collections; provides LC staff with experience with social tagging and Web 2.0 community input; and provides leadership to cultural heritage and government communities.
I am impressed by the depth of the report and the recommendations. Critically they have identified the resourcing issues around ‘getting the most out of it’ and broken these down as a series of options (see page 34).
Even to maintain their current involvement in the project, they have identified a need to increase resourcing. They also identify that ‘just as is’ is no longer enough.
Pro: Modest expense to expand to 1.5 FTE from current 1 FTE (shared by OSI and LS among 20 staff). Additional .5 FTE needed to keep up with the amount of user-generated content on a growing account—both in moderation and in changes to the catalog records (both in Flickr and PPOC).
Con: Loss of opportunity to engage even more people with Library’s visual collections. Risk of losing attention from a Web 2.0 community that expects new and different content and interaction as often as possible.
The Museums and the Web 2009 programme is now out and registration has started. This year the action takes place in Indianapolis and many of us faraway people are looking forward to checking out the IMA.
If you attended MW last year or the recent National Digital Forum in NZ, or maybe your organisation has had one of my private workshop sessions, you might have heard my rant about the dire problems with how museums ‘measure’ the success or otherwise of their websites and online projects.
My paper on the subject from last year’s MW still stands but now I’ve fleshed the content out to a half day workshop.
This year’s workshop in Indianapolis is now taking bookings and, unlike last year, is limited in capacity. We’re going to be doing a lot more digging into participants’ own sites, and I’m hoping everyone who attends will share a month’s worth of data for comparison and analysis purposes.
I’m going to be building this into a solid foundational workshop for basic web analytics as well as a specialised look at the sort of metrics museums, libraries, archives and government web projects need to be engaging with.
If this sounds like it is of interest to you and you happen to be coming to MW09, then register and book a place.
If you were at the National Library of Australia’s annual meeting a while back then you might have spotted Thom Hickey from OCLC mentioning that the Powerhouse Museum has started to use the WorldCat Identities to connect people in the collection to their identity records and library holdings in WorldCat.
Tucked away in the automatically generated metadata (using Open Calais) are some links from people to their WorldCat Identities record over at WorldCat – if such a record exists. At the moment there isn’t a lot of disambiguation between people of the same name going on, so there are quite a few false positives.
In this example, Geoffrey C Ingleton now links to his record on WorldCat Identities.
In the alpha stage all this means is that visitors can now connect from a collection record to the name authority file and thence, on WorldCat, to library holdings (mostly books) by or about the people mentioned in that collection record. Later you’ll be able to do a whole lot more . . . we’re using the WorldCat API and we’ve got a jam-packed development schedule over the next few summer months (it is cooler in the office than out of it!).
Not only does this allow visitors to find more, it also allows the Powerhouse to start to add levels of ranking to the person data identified by Open Calais – an important step in putting that auto-generated metadata to better use. Equally importantly, it opens the door to a whole new range of metadata that can be associated with an object record in our collection. Consider the possibilities for auto-generated bibliographies, or even library-generated additional classification metadata.
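The post doesn’t spell out how a person name extracted by Open Calais gets turned into a WorldCat Identities link, so here is a minimal sketch of the general idea. The `np-` name-pattern URL format and the surname-first normalisation are assumptions for illustration, not the documented scheme, and as the post notes, real matching would also need disambiguation to cut down false positives.

```python
from urllib.parse import quote

def identities_url(person_name: str) -> str:
    """Build a name-based WorldCat Identities URL for an extracted person.

    The "np-<surname>, <forenames>" URL pattern here is an illustrative
    assumption. Name-only matching has no disambiguation, hence the
    false positives mentioned above.
    """
    parts = person_name.strip().split()
    if len(parts) < 2:
        # A single token is too ambiguous to link at all.
        return ""
    # e.g. "Geoffrey C Ingleton" -> "ingleton, geoffrey c"
    normalized = f"{parts[-1]}, {' '.join(parts[:-1])}".lower()
    return "http://worldcat.org/identities/np-" + quote(normalized)

print(identities_url("Geoffrey C Ingleton"))
```

In practice a ranking layer like the one described above would sit on top of this, checking whether the candidate identity’s associated works plausibly relate to the object record before showing the link.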
For those excited by the possibilities offered by combining the collective strengths of each partner in the LAM (libraries, archives, museums) nexus, this should be a good example of a first step towards mutual metadata enhancement.
We’re also very excited about the possibilities that the National Library of Australia’s People Australia project holds in this regard too.
As we’ve been getting a lot of feedback on these, here’s another of Jean-Francois Lanzarone’s video montages composed from detail in our glass plate negatives uploaded to the Commons on Flickr. This is the first one he has finished that is made up of multiple source images.
Again, this is simple digital storytelling made with consumer-grade video software (iPhoto and iMovie) and Creative Commons-sourced music. These don’t take a long time to make either.
More will go up on our Photo of the Day blog in the new ‘videos‘ section. I will only highlight them when new techniques are used rather than re-post each one from now on.
[Oh, and yes Jean-Francois will be choosing some backing other than piano music for the next ones!]
Already images and videos of the exhibition and the launch, taken by members of the public (“the people formerly known as the audience”) are starting to appear online across the social web.
Here are photos on Flickr, and no doubt tomorrow there will be videos on YouTube uploaded by visitors. Over on the fan forums there’s already much chatter. The Facebook page should get a bunch of uploads shortly, and tweets and status updates across the social networks will begin to happen (of course in far lower volume than in the US).
Of course in times past these images and discussions would have been private but now they are public and discoverable. We’ll be keeping an eye on activity over the coming weeks, listening and learning. We’ll also be posting ‘official’ photos soon.
If you swing by the exhibition yourself then make sure you post and tag your photos and comments.
“Facebook”, spray paint on scrap-yard, Bamako – Mali, 2008
Minelli’s Contradictions series, as a short interview on Wooster explains, illuminates the techno-social environment where –
“users are pushed to live in an intense way the abstraction from reality, living technologies only as an idea and sometimes without even knowing their real functions. And this aspect works for the social-networks too. The idealization connected with these experiences provokes a small-but-important detach of the perception of reality and what i want to do by writing the names of anything connected with the 2.0 life we are living in the slums of the third world is to point out the gap between the reality we still live in and the ephemeral world of technologies.”
The last Culturemondo meeting was held in Cuba and focussed on the Americas. It was a timely reminder of the very uneven distribution of digital content and culture, and the ‘alternative modernities‘ under globalisation.
This one takes place in Taipei. The focus, much like Cuba, is on skill and strategy-sharing between those involved with large scale cultural portals (in the broadest sense), web strategists and digital culture policy makers. This time, though, the Culturemondo roundtable focusses on the Asia-Pacific and the stellar set of projects emerging from this region, as well as new initiatives in Africa and India.
I will be blogging the event as it happens here on Fresh & New so stay tuned next week for reports as the action, ideas, and conversations unfold.
One of the best things I saw at the National Digital Forum in Auckland last week was DigitalNZ. Being a Kiwi myself, I am immensely proud that New Zealand has leapt forward and produced a federated collection product that aggregates and then allows access through a web interface and an open API. That it has brought together very disparate partners is also very impressive.
I spoke to the team behind the very choice project who are based at the National Library of New Zealand – Virginia Gow, Courtney Johnston, Gordon Paynter, Fiona Rigby, Lewis Brown, Andy Neale, Karen Rollitt – who all contributed the following interview.
Tell me about the genesis of DigitalNZ?
Digital New Zealand Ā-Tihi o Aotearoa is an ongoing programme that is currently in its implementation phase. It emerged as a response to the difficulties many New Zealand public and community organisations faced in making their content visible to New Zealanders amid the swell of international content available on the Web. In 2007 it received four years’ government funding as a flagship action of the New Zealand Digital Content Strategy, Creating a Digital New Zealand.
The Wave 1 implementation project has been led by National Library but is a very collaborative project. We’ve got representatives from education, culture and heritage, broadcasting, geospatial information, Te Puni Kokiri (Ministry of Māori Development) and the National Digital Forum on our Steering Committee. The project began earlier this year and then really ramped up in June 2008. The project aimed to set up the ongoing infrastructure for the programme and to deliver exemplars that demonstrate what is possible when there is concerted work to improve access and discovery of New Zealand content.
We’ve taken a test lab approach – we’ve identified and worked on potential solutions to some of the many issues that prevent access, use and discovery of New Zealand digital content. Some of these areas have included licensing; metadata quality; improving access to advice around standards, formats and protocols; and the development of a draft framework to help organisations prioritise content for digitisation.
It is important to us that DigitalNZ isn’t seen as ‘just another website’.
We are working with New Zealand organisations, communities and individuals to aggregate their metadata and help make hard-to-find content available for discovery, use and re-use in unique ways. We’ve developed three innovative tools that are ‘powered by Digital New Zealand’ and fuelled by the metadata and content from the many different content providers that we’re working with.
DigitalNZ is made up of:
1) A customisable search builder that lets people design their own search widget to find the type of New Zealand content they are interested in – be it antique cars, pavlova or moustaches! People can flavour it and embed it on their own blogs and websites. We developed this to show new ways for people to discover and interact with New Zealand content and we especially wanted people to use the tools how and where they wanted.
2) New Zealanders can craft their own commemoration of the 90th Anniversary of the Armistice using the Memory Maker – a tool that lets people remix film, photographs, objects and audio clips into a short video that can then be saved, shared, and embedded. This example is helping us show what is possible when you can free the licensing of publicly available content for reuse and remixing.
3) We’re using ‘API Magic’. We’ve developed an open API that enables developers to connect the metadata that fuels DigitalNZ with other data sources, enabling new digital experiences, knowledge, and understanding.
How did you manage to get each of the content owners to buy in to the project?
By lots and lots of talking, visiting, sharing and blogging!
We started by identifying and contacting a wide range of New Zealand content providers, building also on our existing professional networks and contacts as far as possible because time wasn’t a luxury on this project.
It was hard work because DigitalNZ was a completely abstract concept for many content providers until a few weeks ago. We didn’t even have that snazzy diagram explaining how it all fits together until we had gone live!
[That’s a cool magic hat!]
So we basically just committed ourselves to communicating (and communicating and communicating), being open with our information and honest about what we did and didn’t know each step of the way, and helping people out so it was as easy as possible for them to participate.
Content providers took different amounts of time to reach an ‘ah ha’ moment with us and to realise what this could potentially mean for them – “OK, so you’re like a giant free referral engine to my content” or “So I could basically use your tools to make my own search box for my site”. At the end of the day we aren’t doing this for us!
Face-to-face meetings were the most effective, as it meant we could just chat with people about the issues and problems we are all trying to solve. It was a great way for us to learn about people’s content too.
But we also glued ourselves to our inboxes and set up a private DigitalNZ content blog so content providers could talk directly to each other. The discussion of issues around licensing, for example, was great because it meant we didn’t have to do all the thinking and talking!
The private blog also allowed us to share sneak previews of wireframes and functionality that helped us build a better picture of what we were doing.
In the end we actually got more content providers to take a leap of faith with us than we were able to process in time for launch. There is a real commitment out there to increasing access to and use of New Zealand content. We just convinced people to take it a step further and try something new.
What technologies are you using behind the scenes?
The DigitalNZ Harvesting model is best described by this diagram that our Technical Architect Gordon Paynter has whipped up.
We started out hoping that OAI-PMH would be the best way to harvest. However, very few organisations are set up to do this and it was clear that we had to work on something easier. So we then worked on setting up harvesters for XML sitemaps and for RSS feeds. The majority of our content contributors are using the XML sitemaps option.
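To make the sitemap option concrete: a contributor publishes a standard XML sitemap listing the URLs of its collection records, and the harvester fetches and parses it to find what to index. Here is a minimal sketch of that parsing step, using an inline example document in place of a real HTTP fetch; the contributor URLs are made up.

```python
import xml.etree.ElementTree as ET

# Standard sitemap protocol namespace (sitemaps.org).
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_text: str) -> list:
    """Extract record URLs (and optional last-modified dates) from a sitemap."""
    root = ET.fromstring(xml_text)
    records = []
    for url in root.iter(SITEMAP_NS + "url"):
        loc = url.findtext(SITEMAP_NS + "loc")
        lastmod = url.findtext(SITEMAP_NS + "lastmod")  # None if absent
        if loc:
            records.append({"url": loc.strip(), "lastmod": lastmod})
    return records

# In a real harvester this XML would be fetched from the contributor's site.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.org/records/1</loc><lastmod>2008-11-01</lastmod></url>
  <url><loc>http://example.org/records/2</loc></url>
</urlset>"""

for record in parse_sitemap(sample):
    print(record["url"], record["lastmod"])
```

The appeal over OAI-PMH is clear from the sketch: many sites already generate sitemaps for search engines, so contributors need little or no new infrastructure, and `lastmod` gives the harvester a cheap signal for incremental re-harvesting.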
The DigitalNZ system is in 3 parts: a backend, a metadata store and a front end. The backend harvester is written in Java and some legacy PHP (based on the open-source PKP Harvester project). The metadata store is in MySQL, using Solr to power the search. The front end, including the API, the website, widgets and so on, is in Ruby on Rails. It also uses the open source Radiant CMS.
We’ve also set up a DigitalNZ Kete for organisations to upload any content that doesn’t have a home elsewhere on the web. Kete (basket in Māori) is an open source community repository system (built on Ruby on Rails) that we can harvest using OAI-PMH.
One of the great things about Kete is the built-in Creative Commons licensing structure – our ‘technology’ (in the sense of tools) for dealing with the issue of uncertainty around what people can and can’t do with content.
We extended this by adding in the “No known copyright restrictions” statement as well – taking a leaf out of the wonderful Flickr Commons book. A number of Aotearoa People’s Network Libraries are using Kete to gather community treasures and we are including that content in DigitalNZ as it comes online.
The Memory Maker uses the Ideum Editor One which we have sitting on our server in Christchurch, New Zealand.
We’ve worked with three vendors (Boost New Media, 3Months and Codec) and have taken an agile development approach using Scrum. This was a very successful way of working and it enabled us to complete our development within 16 weeks from go to whoa. It was fast, furious and an absolute blast!
The search widget is really great – how are you expecting this to be used?
We think that it is going to be really useful in education, for teachers to use to define project resources or for kids to build into their own online projects. We also see applications for libraries, museums and other organisations in setting up ‘find more’ options relating to specific exhibitions, topics or events. We’ve also had feedback from some content providers that they are considering it as their primary website search. We’re pretty delighted with that! We also really hope to see some unexpected uses as well.
Tell me something about the Memory Maker?
We think that these guys can tell you about the Memory Maker much better than us!
We ran the ‘Coming Home’ Memory Maker campaign to demonstrate what is possible when content providers ‘free up’ selected public cultural content for people to remix with permission; and used the remix editor to deliver the content to users. We filled the Memory Maker with content relating to celebrations for the 90th anniversary of Armistice Day on 11 November 2008. A number of National Digital Forum partners provided the content and the Auckland War Memorial Museum has been the wonderful host.
We’ve been delighted to watch as schools and other web users make their own commemorative videos out of New Zealand digital content – not by stealing it, but because they know they are allowed to and we made it easy for them.
Our detailed case study of the Memory Maker project describes all of the details and issues we worked with.
We’re hoping to work with others on new remix campaigns in the future.
What sorts of mashups are you hoping other developers will build using the API?
We’ve got a couple already – check out the Widget Gallery for Yahoo Pipes mashups of the DigitalNZ search combined with Flickr, and also a headlines search of StuffNZ (NZ website of newspapers and online news) over the DigitalNZ metadata.
We don’t have any specific expectations – just excitement about what is possible. We want to be surprised by what people come up with. The whole point of putting the open API out there is to drive others to make new, exciting things with the content that we’ve made available. DigitalNZ wants to share the love!
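For a sense of what building on an open API like this involves, here is a small sketch of the two basic moves in a Pipes-style mashup: constructing a search request against the records endpoint, and interleaving the results with items from another source such as Flickr. The endpoint path and parameter names below are illustrative assumptions, not the documented DigitalNZ API.

```python
from urllib.parse import urlencode

# Illustrative guess at the API shape, not the documented endpoint.
API_BASE = "http://api.digitalnz.org/records"

def search_url(text: str, per_page: int = 20, fmt: str = "json") -> str:
    """Build a search request URL against the (assumed) records endpoint."""
    query = urlencode({"search_text": text, "num_results": per_page})
    return f"{API_BASE}.{fmt}?{query}"

def merge_results(dnz_items, flickr_items):
    """Interleave DigitalNZ records with items from another feed,
    the way a Yahoo Pipes mashup might (truncates to the shorter list)."""
    merged = []
    for pair in zip(dnz_items, flickr_items):
        merged.extend(pair)
    return merged

print(search_url("pavlova"))
```

A real mashup would fetch the URL, parse the JSON, and hand the merged list to whatever front end the developer dreams up, which is exactly the kind of surprise the team says it is hoping for.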
Go ahead and make us and our content providers happy.
We’re hoping that when people develop new things they’ll let us know, so that we can feature their work in the widget gallery and share it with others.
What other kind of work is DigitalNZ doing?
Another very important aspect of DigitalNZ is that we want to work with NZ organisations to improve understanding and knowledge about how to make their content easier to find, share and use. One of the issues that we’ve come up against is metadata quality. The search tool has shown that search results can only be as good as the quality of the metadata that goes in. Working with people to improve their metadata will strengthen both the API and the discovery experiences for people.
The Contributors’ section of the site provides guidance on how to participate in DigitalNZ as well as good practice advice on content creation and management. The good practice guides are being developed across all aspects of the digital content lifecycle: from selection through creation, management, discovery, use and reuse (including licensing) to preservation. We’re interested in hearing from people who might be able to share expertise and perhaps help build up the material on the site.
We’re also working on an advisory service that will provide support and guidance across the spectrum of content creation and management issues that organisations are facing. This will be developed further over 2009 and we hope to include information, training and development, peer support and discussion forums, as well as drawing on the collective experience and wisdom out there.
Earlier this year the Horizon New Media Consortium convened in Australia to develop a Horizon report specifically for the local education space.
The report, detailing six technologies in the education sector to watch, has been released.
Here’s a snippet – but I encourage you to read and then send around. Remember these are technologies that are yet to ‘jump the chasm’ so there will be some contestation of the findings (there certainly was in the development meetings!).
We find ready examples of established use on campuses of the two technologies that appear on the nearest adoption horizon, virtual worlds & other immersive digital environments and cloud-based applications. Those in the mid-term horizon, geolocation and alternative input devices, are already commonly in use in the consumer world, and educational examples are not difficult to find on campuses working on the leading edge of technology. As would be expected, the furthest horizon contains the two topics that have been least adopted: deep tagging and next-generation mobile. Even in this horizon, examples of campus use do exist, although they tend to be in the early stages of development.
Two interesting pieces of reading for those of you who have to spend time on public transport.
First, from the Research Information Network in the UK comes a report that looks at the needs of academic researchers in discovering the content of museum collections using online databases. Not surprisingly, “their most important wish is that online access to museum databases be provided as quickly as possible, even if the records are imperfect or incomplete”. Read the report.
Second, and covering a totally different audience, is the long awaited report from the MacArthur Foundation on Digital Youth. This was a major piece of research involving a lot of different research teams and the final report is really quite excellent. If you are time poor then skip straight to the summary white paper (PDF).
Otherwise take the time and read the full report. I’d direct F&N readers immediately to the chapter entitled Media Ecologies. This chapter is particularly important because it reminds us that even the same young person can use different digital media in widely differing ways, and with different proficiencies. This chapter proposes that there is a distinct difference between use of digital media that are friendship-based versus those that are interest-based (in the minority). Often in the cultural sector we conflate these two groups or expect that the friendship-based users are actually interested in our interest-based content.
One of the projects I mentioned in one of my workshops at the NZ National Digital Forum was Boxee. I was alerted to Boxee by Shannon O’Neill only a night or two ago via his RSS.
Boxee is a good example of the important social side of media use and consumption. It is also a good example of connected media.
At the base level your media files are indexed and connected to their cover art, lyrics, reviews and other metadata from across the web. More importantly, though, the service enables the social element of synchronous watching/listening with your friends. You can alert your friends when you are watching or listening to something, and you can simultaneously watch/listen with your friends.
This ‘sociality’ gets to the core of what media is about – it is about content and the social relationships and meanings that form around that content.