Well, Wolfram Alpha is another nail in the coffin of the value of ‘raw data’ on the internet. And another reason why museums (and everyone else) need to emphasise interpretation, value add, and the ‘experience’ (Max Anderson’s ‘the visceral’). The raw materials will increasingly be free, easy to find, and ready for recombination and building upon. (Another reason why if you are not seriously cataloguing, documenting and digitising you are going to become invisible)
I’m impressed with my initial fiddling around.
Once upon a time you would have found it best to visit the Sydney Observatory to find out where Beta Centauri is in the sky. They would have given you a sky chart – which you can now download monthly from our site with accompanying podcast, or buy the annual Sky Guide book.
Of course, you’ll still find the Observatory a great place for a nerdy date or to get a go on the big telescope, and savour the experience of the historic building and unique location.
Now for the sky and factual data I can just go to Wolfram Alpha and do this search. Notice it has given the result relative to my geographical position and the time in my location. Equally impressive is the ability to see the sources used to generate the information (critical in establishing trust), and the ability to download the result as a PDF.
Now go and try it with people, places and things . . . .
You’ve probably noticed Google has also done some nifty new enhancements to their search.
(Disclaimer – this is a rushed post cobbled together from equally rushed notes!)
Like most years, this year’s Museums and the Web (MW2009) was all about the people. Catching up with people, putting faces to names, and having heated discussions in a revolving restaurant atop the conference venue in Indianapolis. The value of face-to-face is even greater for those travelling from outside the USA – for most of us it is the only chance to catch up with many of these people.
Indianapolis is a flat city surrounded by endless corn fields, which accounts for the injection of corn syrup into every conceivable food item. No one seems to walk, preferring four wheels to two legs – making for a rather desolate downtown and a highly focussed conference event with few outside distractions.
The pre-conference day was full of workshops. I delivered two – one with Dr Angelina Russo on planning social media, and the other an exhausting (and hopefully exhaustive) examination and problematising of traditional web metrics and social media evaluation. With those out of the way I settled back and took in the rest of the conference.
MW2009 opened with a great keynote from Maxwell Anderson, director of the Indianapolis Museum of Art. Max’s address can be watched in full (courtesy of the IMA’s new art video site – Art Babble) and is packed with some great moments – here’s a museum director who gets the promise of the web and digital and isn’t caught up in the typical physical vs virtual dichotomy. With Rob Stein’s team at the IMA the museum has been able to test and experiment with a far more participatory and open way of working while they (still) work out how to bring the best changes into the galleries as well.
After the opening keynote it was into split sessions. Rather than cover everything I saw, I’ll zero in on the key things I took away, cribbed straight from my notes. I’ve left a fair bit out, so make sure you head over to Archimuse and digest the papers.
Using the cloud
In the session on cloud computing Charles Moad, one of the IMA developers, delved deep into the practicalities of using Amazon Web Services for hosting web applications. His paper is well worth a read and everyone in the audience was stunned by the efficiencies, flexibility (suddenly extra load? just start up another instance of your virtual servers!), and incredibly low cost of the AWS proposition. I’m sure MW2010 will have a lot of reports of other institutions using cloud hosting and applications.
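To make the ‘just start up another instance’ point concrete, here is a minimal sketch using boto3, Amazon’s current Python SDK (which postdates Charles’s talk) – the AMI ID and instance type are placeholders, not anything from his paper.

```python
# Hypothetical sketch: spinning up one more web server when load spikes.
# The AMI ID below is a placeholder for a pre-baked web server image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def add_web_server():
    """Launch one extra instance from a pre-configured server image."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Started extra instance:", instance_id)
    return instance_id
```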
Following Charles, Dan Zambonini from Box UK – who works with, but isn’t in, the museum sector – showed off the second public iteration of Hoard.it. Last year Hoard.it caused a kerfuffle by screen-scraping collection records from various museum collections without asking. This year Dan provoked by asking what the real value of multimillion-Euro efforts like Europeana actually is. Dan reckons that museums should focus on being service providers – echoing some of what Max Anderson had said in the keynote. According to Dan, museums have a lot to offer in terms of “expertise, additional media, physical space, reputation & trust, audience, voice/exposure/influence” – and these are rarely reflected in how most museums approach the ‘problem’ of online collections.
APIs
Last year there was a lot of talk of museum APIs at MW – then in November the New Zealanders trumped everyone by launching Digital NZ. But in the US it has been the Brooklyn Museum’s launch of its API a little while ago that seems to have put the issue in front of the broader museum community.
Richard Morgan from the V&A introduced the private beta of the V&A’s upcoming API (JSON/REST) and presented a rather nice mission statement – “we provide a service which allows people to construct narrative and identity using museum content, space and brand”. Interestingly, to create their API they have had to effectively scrape their existing collection online!
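For those who haven’t used one, a JSON/REST collection API boils down to a URL you can fetch and a structured response you can parse. The endpoint and field names below are invented for illustration – the V&A’s beta wasn’t public at the time of writing.

```python
# Hypothetical JSON/REST collection API query; endpoint and fields invented.
import json
import urllib.request

url = "https://api.example-museum.org/collection/search?q=teapot&format=json"
with urllib.request.urlopen(url) as response:
    results = json.load(response)

# Each record is a plain dictionary of the museum's published fields.
for record in results.get("records", []):
    print(record.get("title"), "-", record.get("object_number"))
```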
Brian Kelly from UKOLN talked about an emerging best practice for the development of APIs and the importance of everyone not going it alone. Several in the audiences of both Richard’s and Brian’s sessions were uneasy about the focus on APIs as a means for sharing content – “surely we already have OAI etc?”. But as one attendee anonymously pointed out, yes, many museums have OAI, but without publicising it or providing easy access their OAI is really ‘CAI’.
And APIs still don’t get around the thorny issues of intellectual property. (I’ve been arguing we need to organise our content licensing first in order to reduce the complexity of the T&C of our APIs).
As Piotr from the Met and author of the excellent Museum Pipes shows time and time again, the real potential of APIs and the like is only really apparent once people start making interesting prototypes with the data. Frankie Roberto (ex-Science Museum and now at Rattle) showed me Rattle’s upcoming Muddy service – they’ve taken Powerhouse data and done some simple visualisations.
APIs from a select few museums will probably provide the rocket under the sector that is needed to really open up data sharing – but we need some great case studies to emerge before the true potential can be realised.
Geolocation
Another theme to reach the broader community this year was geolocation. Amongst a bunch of great projects showing the potential of geo-located content for storytelling and connecting with audiences was the rather excellent PhillyHistory site. The ability to find photos near where you grew up has resulted in some remarkable finds for the project as well as a healthy bit of revenue generation – $50,000 from the purchase of personal images.
Aaron Straup Cope, geo-genius at Flickr, delivered another of his entertaining and witty presentations, covering some of the problems with geo-coding. In so doing he revealed that most of the geo-coded photos on Flickr are in fact hand geo-coded – that is, people opening a map, navigating to where they think they took the photo, and sticking in a pin. The map is not the territory – the borders of my neighbourhood are not the same as yours, and neither of ours are the same as those formalised by government agencies. This is the case as much for obvious contested territories as it is for local spaces. The issue for geocoders, then, is how to map these “perceptions of boundaries”. Aaron’s slides are up on his blog and are worth a gander – they raise a lot of questions for those of us working with community memory.
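One way to see why ‘perceptions of boundaries’ is hard: the same pin can fall inside one person’s hand-drawn neighbourhood and outside another’s. A toy point-in-polygon (ray-casting) test makes this concrete – the polygons and coordinates below are invented, not Flickr’s data.

```python
# Toy example: the same photo pin tested against two people's different
# hand-drawn "neighbourhood" polygons. Coordinates are invented.
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside polygon [(lon, lat), ...]?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > lat) != (yj > lat):  # edge crosses the pin's latitude
            if lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
                inside = not inside
        j = i
    return inside

pin = (151.211, -33.886)  # one geotagged photo
mine = [(151.20, -33.89), (151.22, -33.89), (151.22, -33.88), (151.20, -33.88)]
yours = [(151.19, -33.89), (151.21, -33.89), (151.21, -33.885), (151.19, -33.885)]
print(point_in_polygon(*pin, mine))   # True  -- inside "my" neighbourhood
print(point_in_polygon(*pin, yours))  # False -- outside "yours"
```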
Galleries
Nina Simon made her MW debut with a fun workshop challenging all of us in the web space to ‘get out of our (web) ghetto’ and tackle the challenge of in-gallery participatory environments. Her slides (made using Prezi) covered several examples of real-world tagging, polling, collaborative audience decision-making and social interactions. The challenge to the audience to “imagine a museum as being like . . . ” elicited some very funny responses, and Nina has expanded on them on her blog.
I don’t entirely agree with Nina’s call to action – the nature and type of participation and expectation varies greatly between science centres, history museums, and art museums. And there are complex reasons as to why participatory behaviours are sometimes more obviously visible online – and why many in-gallery behaviours are impossible to replicate online.
But the call to work with gallery designers is much needed. All too often there is a schism between the teams responsible for online and in-gallery interactions – technologically-mediated or not.
Kevin von Appen’s paper on the final day complicates matters even more. Looking at the outcomes of a YouTube ‘meet up’ at the Ontario Science Centre, Kevin and the OSC team struggled to work out what the real impact of the meet up was. Well attended, with people choosing to fly in from as far away as Australia, it would have seemed that 888Toronto888 was a huge success. However –
Clearly, meetup participants were first and foremost interested in each other. The OSC was the context, not the star. Videos that showcased the meetup-as-party/science center-as-party-place positioned us as a cool place for young adults to hang out, and that’s an audience we’d like to grow.
It wasn’t cheap either – the final figure worked out at $95 per participant. Clearly, if we want more ‘participatory experiences’ in our museums it isn’t going to be cheap. And if we want audiences to have ownership of our spaces then we may need to rethink what our spaces are.
(As an aside, I finally learnt why art museums have more staff in their galleries than other types of museums – one per room – albeit not necessarily engaging with audiences! According to my knowledgeable source, art museums have found that it is cheaper to hire people to staff the galleries than it is to try to insure the irreplaceable works inside.)
“The switch”
One of the side streams of MW this year was a fascination with ‘the switch’. This arose from some late-night shenanigans in the ‘spinny bar’ – a revolving restaurant atop the Hyatt. The ‘switch’ was what turned the bar’s rotation on and off, and on the final day a small group were ushered into the bar and witnessed the ‘turning on’. Charles, the head of engineering at the hotel, gave us a one-hour private tour of the ‘switch’ and the motor that ran the bar – it was fascinating and a timely reminder of the value of the ‘private tour’ and the ‘behind the scenes’. In return, Charles asked all of us plenty of questions about the role of technology in his children’s education and how to get the most out of it.
New discoveries as a result of putting our incomplete collection database online are pretty commonplace – almost every week we are advised of corrections – but here’s another lovely story of an object whose provenance has been significantly enhanced by a member of the public – a story that made the local newspapers!
Here’s the original collection record as it was in our public database.
If your organisation is still having doubts about the value of making available un-edited, un-verified, ageing tombstone data, then it is worth showing them examples like these.
Today, whilst Seb was slaving away giving two workshops in a row at Museums and the Web 2009, I spent the day with Jim Spadaccini and Paul Lacey in a great, full-day workshop called ‘Make It Multi-touch’ that showcased the custom-built 50″ touch table. You can view it over at Ideum.
We got inside information on how this technology was developed, from the initial prototype back in September 2008 – which featured a dual-mirror, two-camera solution and demanded that complicated gestures be processed quickly – through two further prototypes to the final product you can see here. The table detects ‘blobs’ (the reflections of fingers) and feeds them to software that recognises touch, drag-and-drop, pinch-and-expand, drawing, rotation and double-tap, all of which become intuitive to the user within a short time-frame. The aim is to provide an interactive social experience that is very different to traditional computer-based interactive exhibits, which tend to isolate the experience to a single visitor.
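As a rough illustration of the blob-to-gesture step described above, here is a minimal sketch of inferring a pinch/expand from two tracked blob positions across frames – the frame data and threshold are invented, not Ideum’s implementation.

```python
# Minimal sketch: inferring a pinch/expand gesture from two tracked
# "blobs" (finger reflections) across two frames. Values are invented.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pinch_scale(prev_blobs, curr_blobs, threshold=0.05):
    """Return a zoom factor if two fingers moved apart/together, else None."""
    if len(prev_blobs) != 2 or len(curr_blobs) != 2:
        return None  # a pinch needs exactly two touch points
    scale = distance(*curr_blobs) / distance(*prev_blobs)
    return scale if abs(scale - 1.0) > threshold else None

# Two fingers moving apart between frames -> zoom in by 1.5x
print(pinch_scale([(100, 100), (200, 100)], [(80, 100), (230, 100)]))
```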
What can we learn from the public about using museum collections and content through technology such as multi-touch? This form of technology may be a novelty for some at this stage, but the future design of this product holds potential for change across many museum applications.
Scenario: Multi-touch tables are available in a museum exhibition for the public to use and interact with exhibition content. Images of collection objects can be moved across the table, and details of content can be zoomed in on through simple “blob” (finger) movements. Descriptive information about the object can be shown from XMP metadata stored in the file. Location data can be retrieved, and users can create their own exhibit and learning experience. This is a very different user application, and one that can change the visitor’s experience. Do we need to compete with devices that are currently available at home and make the museum version social and educational? Does fixed navigation work anymore?
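On the XMP point: XMP is an XML packet embedded in the image file itself, so descriptive text can travel with the object image. Here is a minimal sketch of pulling a Dublin Core description out of that packet using only the standard library – a real system would use a dedicated XMP library, and the filename is hypothetical.

```python
# Minimal sketch: read the Dublin Core description from a file's embedded
# XMP packet (an XML block inside the image file). Filename is hypothetical.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def read_xmp_description(path):
    with open(path, "rb") as f:
        data = f.read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None  # no XMP packet present
    root = ET.fromstring(data[start:end + len(b"</x:xmpmeta>")])
    desc = root.find(f".//{DC}description")
    if desc is not None:
        li = desc.find(f".//{RDF}li")  # dc:description is an rdf:Alt of rdf:li
        if li is not None:
            return li.text
    return None

print(read_xmp_description("collection_object.jpg"))
```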
Multi-touch technology has the potential to change the museum experience, and it will be interesting to watch it develop. Will the public start to expect to come to museums to interact with exhibits in this new way?
Backtype has just released the very first 0.1 version of a WordPress plugin that integrates tweets and retweets as well as comments on other blogs into the comment stream of your original WordPress posts.
I’ve been trialling an install and you can see it in action on a post like this one. Notice that the tweets are interleaved with comments on the blog itself – it even deciphers shortened URLs. (And in case you were wondering which URL shortener is the best, check out this article from Searchengineland – hat tip Chloe Sasson!)
This sort of cross-site conversation tracking is becoming increasingly important in a world where tweets are easier and more common than on-blog comments. I’ll be watching with interest to see how the plugin evolves.
A word of caution before you go and roll it out on all your blogs – consider the additional moderation that seeing every public tweet and offsite comment is going to create for you!
As regular readers know, we’ve been trialling QR codes, and a little while back rolled them out on a small selection of object labels in a Japanese fashion display.
I’ve been keeping an eye on their usage and on some of the continuing problems around lighting, shadows, and low-resolution mobile phone cameras like that of the current iPhone 3G. So far usage has been, as expected, low. Firstly, the target audience for the exhibition content has, not surprisingly, not been very tech-savvy. Secondly, the ‘carrot’ isn’t clear enough to cause the audience to respond to the call to action.
More critically, one thing we still haven’t quite gotten right is the image size and error correction.
Shortly after the last post we upped the error correction in the codes to 30% (meaning that up to about 30% of the image can be obscured and it will still scan – although the redundancy isn’t evenly spread). This alone wasn’t enough.
With the long URLs encoded in the codes plus the extra error correction, the resulting QR codes were even more ‘dense’ and hard to scan with 2-megapixel cameras. We’ve now done another set of codes with our own locally generated version of TinyURLs. This has reduced the encoded data from nearly 70 characters to around 25 – and thus a far less dense code.
Even so, 2-megapixel cameras have patchy results when the codes are obscured by lens flare or shadow, so our current thinking is that in future the codes may need to be as much as 50% bigger.
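The payload-versus-density trade-off is easy to demonstrate with the Python qrcode library: at the same ~30% (level H) error correction, a shorter URL produces a code with far fewer, and therefore physically larger, modules. The URLs below are stand-ins, not our actual addresses.

```python
# Demonstrating how payload length drives QR density at ~30% (level H)
# error correction. The URLs are stand-ins for our long and short links.
import qrcode

def modules_per_side(data):
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H)
    qr.add_data(data)
    qr.make(fit=True)  # choose the smallest QR version that fits
    return 17 + 4 * qr.version  # QR spec: modules per side by version

long_url = "http://www.example-museum.com/collection/database/?irn=123456"
short_url = "http://exm.ms/a1b2c"

print(modules_per_side(long_url))   # denser grid, harder for 2 MP cameras
print(modules_per_side(short_url))  # fewer, larger modules -- easier scan
```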
Since April 8 last year we’ve uploaded 1,171 photos (382 geotagged) from four different archival photographic collections. These have been viewed 777,466 times! For photographs that had been either hidden away on our website (the original 270 Tyrrell photographs on our website were viewed around 37,000 times on our site in 2007), or not yet even catalogued and digitised this is a fantastic result. And that’s not even scratching the surface of the amazing extra information and identifications, mashups, new work and more that has come from the community participation.
The book was published using print-on-demand service Blurb and comes as a softcover or two different hardcovers – it is your choice! Inside there are a range of photographs alongside their individual statistics, user comments and some of the stories of discovery that have come from the first year in the Commons.
I’d personally like to thank everyone at the Powerhouse who has supported our involvement in the Commons and helped make available so many photographs. I’d also like to thank the Flickr community, who have so enthusiastically embraced these historical images; Paul Hagon for his mashup; the staff at Flickr (esp George, Dan and Aaron); and the Indicommons crew.
Without all of you this would never have happened.
As many readers know, Paula Bray, our manager of Visual and Digitisation Services, has been working on a paper for Museums and the Web looking at the impact of the Commons on Flickr on our image sales business.
Paula’s paper has been published over at Archimuse and if you are going to be in Indianapolis next week you’ll be able to get the visually enhanced interactive version.
Over on our Photo of the Day blog, Paula has added some updated figures that give a clearer picture of the impact of the Commons. Have a read and feel free to ask questions either here or on Photo of the Day. I’ll make sure Paula gets them.
We are celebrating our 1st birthday in the Commons on Flickr tomorrow and have an exciting announcement waiting . . .
Another exciting thing we are launching today is our Object of the Week blog. It nicely complements our Photo of the Day which recently celebrated 500 posts!
We kick off Object of the Week with a profile of the project lead, curator Erika Dicker. Erika has chosen a favourite object from the collection – a prawn riding a bike, and her quirky tastes are also profiled in a quick Q&A.
Each week the blog will feature a new object and, until each curator has posted, a curator profile. We hope the blog will reveal some of the personalities behind the collection as well as many of the oddities and exciting objects that the public rarely gets to see. In coming weeks there will be video interviews and a whole lot more.
We’re happy to announce that as of today all our online collection documentation is available under a mix of Creative Commons licenses. We’ve been considering this for a long time but the most recent driver was the Wikipedia Backstage tour.
Collection records are now split into two main blocks of text, each carrying its own license.
Just to be very clear, images, except where we have released them to the Commons on Flickr, remain under license. There’s a lot more work to be done there.
So what does this really mean?
Teachers and educators can now do what they want or need to with our collection records, and encourage their students to do the same, without fear. Some probably did so anyway, but we know that a fair number asked for permission, others wrongly assumed the worst (that we’d make them fill out forms or pay up), and it is highly likely that schools were at times charged blanket license fees by collecting agencies.
Secondly, it means that anyone, commercial or non-commercial, can now copy, scrape or harvest our descriptive, temporal and geospatial data, and object dimensions, for a wide range of new uses. This could be building a timeline, a map, or a visualisation of our collection mixed with other data. It could be an online publication, a printed text book, or it could be just to improve Wikipedia articles. It can also now be added to Freebase and other online datastores, and incorporated into data services for mobile devices and so much more.
Obviously, we’ll be working to improve programmatic access to this data along the lines of the Brooklyn Museum API, as well as through OAI and other means, but right now we’re permitting you to use your own nous to get the data, legitimately and with our blessing – as long as you attribute us as the source, and share alike. We figure that a clear license is probably the ground-level work that needs to precede a future API in any case. (There’s a sketch of what an OAI harvest looks like below.)
Thirdly, we’ve applied an attribution, non-commercial license to object provenance, largely to allow broad educational and non-commercial repurposing but not to sanction commercial exploitation of material that is usually quite specific to our Museum (why we collected it, etc).
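For the harvesting-minded, here is a hedged sketch of the kind of OAI-PMH request that programmatic access typically involves – the endpoint URL is hypothetical, not our actual service. A ListRecords request with the oai_dc prefix returns Dublin Core records as XML.

```python
# Hypothetical OAI-PMH harvest: list Dublin Core records from an endpoint.
# The endpoint URL is invented for illustration.
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "https://api.example-museum.org/oai"  # hypothetical
DC = "{http://purl.org/dc/elements/1.1/}"

url = ENDPOINT + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as response:
    root = ET.parse(response).getroot()

# Print the title of every harvested record (remember to attribute the
# source and share alike when reusing the data).
for title in root.iter(DC + "title"):
    print(title.text)
```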
You might be wondering why we didn’t go with a CC-Plus license?
A CC-Plus license was considered, but given the specific nature of the content (text) we felt that this added a layer of unnecessary complexity. We may still, in the future, apply a CC-Plus license to images, where it will make more sense given we have a commercial unit actively selling photographic reproductions and handling rights and permissions.