Categories
Policy

Research into the value of public sector information for cultural institutions

The Australian Government 2.0 Taskforce is calling for quotes for a range of research projects.

Project #6 (pdf) is a quick turnaround project examining the ‘value of public sector information for cultural institutions’. AU$40,000 is available with a quote deadline of 9 September. Report due by 19 October.

Scope:

Assess and quantify the economic and social benefits of making government information held by cultural institutions more widely available. For example, a cost benefit analysis of the social value of the additional outreach of the Powerhouse Museum in releasing various ‘orphan’ works into Creative Commons licensing.

Develop a tool or method to assist cultural agencies in providing open access to information. The tool or method should consider costs for open access, “second-best principles” for pricing data and could also include decision support on intellectual property issues.

Read and apply over here.

Categories
Geotagging & mapping Interactive Media Semantic Web

Introducing About NSW – maps, census visualisations, cross search

Well here’s an alpha release of something that we’ve been working on forever (well, almost 2 years). It is called About NSW and is a bit of a Frankenstein creation of different data sets mashed together by a sophisticated backend. The project began with an open-ended brief to be a cross-sectoral experiment in producing new interfaces for existing content online. In short, we were given permission to play in the sandbox and with that terrain comes a process of trial and error, learning and revision.

We’ve had an overwhelming number of feature requests and unfortunately have not been able to accommodate them all, but this does give us an indication of the need to work on solutions to common problems such as –

  • “can we handle electoral boundaries and view particular datasets by suburb postcodes?”
  • “can we aggregate upcoming cultural events?”
  • “can we resolve historical place names on a contemporary Google Map?”

to name just a few.

There are three active voices in this blog post: my own, accompanied by those of Dan MacKinlay (developer) and Renae Mason (producer). Dan reads a lot of overly fat economics and social theory books when not coding and travelling to Indonesia to play odd music in rice paddies; while Renae reads historical fiction, magical realism and design books when not producing and is about to go tango in Buenos Aires – hola!

We figured this blog post might be a warts-and-all look at the project to date. So grab a nice cup of herbal tea and sit back. There are connections here to heavyweight developer stuff as well as to more philosophical and practical issues relevant to the Government 2.0 discussion.

So what exactly is About NSW?

Firstly it is the start of an encyclopaedia.

Our brief was never to create original content but to organise what already existed across a range of existing cultural institution websites. There’s some original content in there, but that is probably not exciting in itself.

While projects like the wonderful Encyclopaedia of New Zealand, ‘Te Ara’ are fantastic, they cost rather more than our humble budget. Knowing up front that we had scant resources for producing ‘new’ content, we have tried to build a contextual discovery service that assists in exposing existing content online. We aimed to form partnerships with content providers to maximise the use of all those fact sheets, images and other information that is already circulating on the web. We figured, why duplicate efforts? In this way, we would like to grow About NSW as a trustworthy channel that delivers cultural materials to new audiences, sharing traffic and statistics with our partners along the way. That said, there’s actually a whole lot of exciting material lurking deep in the original content of the encyclopaedia, including a slew of digitised almanacs that we are yet to expose.

We’re particularly excited to be able to bring together and automatically connect content from various sources that otherwise wouldn’t be “getting out there”. There are a lot of websites that go around and scrape other sites for content – but really getting in there and making good use of partners’ data under a (reasonably) unrestrictive licence is facilitated by having the request come from inside government. It’s not all plain sailing, mind – if you look through our site you’ll see that a few partners were afraid to display the full content of their articles and have asked that they be locked down.

But, because we work in aggregate, we can enrich source data with correlated material. A simple, lucid article about a cultural figure can provide a nice centrepiece for an automatically generated mashup of related resources about said figure. We could go a lot further there, integrating third-party content from sources like Wikipedia and Freebase rather than going through the tedious process of creating our own articles. (We certainly never intended to go into the encyclopaedia business!)

Secondly, the site is an explorer of the 2006 Australian Census data. As you might know, the Australian Bureau of Statistics does a rather excellent job of releasing statistical data under a Creative Commons license. What we have done is take this data and build a simple(r) way of navigating it by suburbs. We have also built a dynamic ‘choropleth’ map that allows easy visualising of patterns in a single data set. You can pin suburbs for comparison, and look for patterns across the State (with extra-special bells and whistles built for that by some folks from the Interaction Consortium who worked on the team).
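
For the curious, the choropleth idea boils down to bucketing a single census variable into a handful of colour classes. Here’s a minimal Python sketch of that binning step – the suburb figures are made up for illustration and this is not the site’s actual code:

    from bisect import bisect_left

    def quantile_breaks(values, classes=5):
        """Upper bounds of `classes` equal-count bins."""
        ordered = sorted(values)
        return [ordered[int(len(ordered) * i / classes) - 1] for i in range(1, classes + 1)]

    def colour_class(value, breaks):
        """Index (0 .. classes-1) of the shade a suburb's value falls into."""
        return min(bisect_left(breaks, value), len(breaks) - 1)

    # Hypothetical figures: median age by suburb (not real ABS data).
    median_age = {"Ultimo": 29, "Pyrmont": 32, "Newtown": 31, "Mosman": 41, "Penrith": 33}
    breaks = quantile_breaks(median_age.values())
    shades = {suburb: colour_class(age, breaks) for suburb, age in median_age.items()}
    print(shades)  # each suburb mapped to one of five colour classes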

Third, we’ve started the long and arduous process of using the same map tools to build a cultural collections navigator that allows the discovery of records by suburb. This remains the most interesting part of the site but also the one most fraught with difficulties. For a start, very few institutions have well geo-located collections – certainly not with any consistency of precision. We have tried some tricky correlations to try to maximise the mappable content but things haven’t (yet) turned out the way we want them to.

But, considering the huge data sets we are dealing with, we reckon we’ve done pretty well given the data quality issues and the problem that historical place names can’t be automatically geocoded.

Fourth, not much of this would be much chop if we weren’t also trying to build a way of getting the data out in a usable form for others to work with. That isn’t available yet, mainly because of the thicket of issues around rights and the continuing difficulty of convincing contributors that views of their content on our site can be as valuable – potentially more valuable, when connected to other material – than views on their individual silo-ed sites.

Where is the data from?

About NSW has afforded a unique opportunity to work with other organisations that we don’t usually come into contact with, and we’ve found that generosity and a willingness to share resources for the benefit of citizens is alive and well in many of our institutions. For example, we approached the NSW Film & Television Office with a dilemma – most of the images that we can source from the libraries and archives are circa 1900, which is fantastic if you want to see what Sydney looked like back then, but not so great if you want to get a sense of what Sydney looks like today. They kindly came to the party with hundreds of high quality, contemporary images from their locations database, which isn’t public facing but currently serves a key business role in attracting film and television productions to NSW.

Continuing with our obsession for location-specific data, we also approached the NSW Heritage Branch, who completely dumbfounded us by providing not just some of their information on heritage places but the entire NSW State Heritage Register. The same gratitude is extended to the Art Gallery of NSW, who filled a huge gap on the collection map with their collection objects, so audiences can now, for the first time, see which places our most beloved artworks are associated with (and sometimes the wonderful links with heritage places – consider the relationship between the old gold-mining town of Hill End and the ongoing artist-in-residence program hosted there, which has attracted artists such as Russell Drysdale and Brett Whiteley). With our places well and truly starting to take shape, we decided to add demographic data from the most recent census from the Australian Bureau of Statistics, who noted that their core role in providing raw data leaves them little time for the presentation layer, so they were delighted that we were interested in visualising their work.

Besides our focus on places, we are pretty keen on exploring more about the people who show up in our collection records and history books. To this end, the Australian Dictionary of Biography has allowed us to display extracts of all their articles that relate to people associated with NSW.

As a slight off-shoot to this project, we even worked with the NSW Births Deaths and Marriages Registry to build the 100 Years of Baby Names, which lives on the central NSW Government site – but that’s a different story that has already been blogged about here.

There are of course many other sources we’d like to explore in the future but for now we’ve opted for the low-hanging fruit and would like to thank our early collaborators for taking the leap of faith required and trusting us to re-publish content in a respectful manner.

There are many things we need to improve but what a great opportunity it has been to work on solving some of our common policy and legacy technology problems together.

Cultural challenges

Unfortunately, despite the rosy picture we are beginning to paint here, the other side is that collecting institutions are not accustomed to working across silos and are often not well-resourced to play in other domains.

Comments like “This isn’t our core business!” and “Sounds great but we don’t have time for this!” have been very common. Others have been downright resistant to the idea altogether. The latter types prefer to operate a gated estate that charges for entrance to all the wonders inside – the most explicit comment being “We don’t think you should be putting that kind of information on your site because everyone should come to us!”.

But we wonder, what’s more important – expert pruning, or a communal garden that everyone can take pride in and that improves over time?

To be fair, these are confronting ideas for institutions that have always been measured by their ‘authoritativeness’ and by the sheer numbers that can be attracted through their gates, and not the numbers who access their expertise.

Unsurprisingly these are the exact same issues being tackled by the Government 2.0 Taskforce.

It’s an unfortunately constructed, competitive business, and the worth of institutions online is still measured by how many people interact with their content on their own website. Once those interactions begin to take place elsewhere it becomes more difficult to report, despite the fact that it is adding value – after all, how do you quantify that?

We’ve done some nifty initial work with the Google Analytics API to try to simplify data reporting back to contributors, but the objection is more philosophical than anything.

Add to that copyright and privacy issues and you have a recipe for trouble.

Munging data

Did we already say that this project hasn’t been without its problems?

The simplest summary is: web development in government has generally had little respect for Tim Berners-Lee’s principle of least power.

While sites abound with complicated Java mapping widgets, visually lush table-based designs and so on, there is almost no investment in pairing that with simple, convenient access to the underlying data in a direct, machine-readable way. This is particularly poignant for projects that started out with high ideals but have since lost funding; all the meticulous work expended on their rich designs can go to waste if the site only works in Netscape Navigator 4.

Making simple data sets available is timeless insurance against the shifting, ephemeral fads of browser standards and this season’s latest widget technology, but it’s something few have time for. That line of reasoning is particularly important for our own experimental pilot project. We have been lucky, unlike some of our partners, in that we have designed our site from the ground up to support easy data export. (You might well ask, though: if we can’t turn that functionality on for legal reasons, have we really made any progress?)

As everyone knows, pulling together datasets from different places is just a world of pain. But it is a problem that needs to be solved if any of the future things all of us want to do are to get anywhere. Whilst we could insist on standards, what we wanted to experiment with here was how far we could get without mandating them – because in the real world, especially with government data, a lot of data is collected for a single purpose and never considered for sharing and cross-mixing.

We’d love plain structured data in a recognised format (RDF, OAI-PMH, ad hoc JSON over REST, KML – even undocumented XML with sensibly named elements will do), but it isn’t generally available. Instead, what we usually get are poorly marked-up HTML sites, or databases full of inconsistent character encodings, that need to be scraped – or even data that we need to stitch together from several different sources to re-assemble the record in our partner’s database, because their system won’t let them export it in one chunk. Elsewhere we’ve had nice Dublin Core available over OAI, but even once all the data is in, getting it to play nicely together is tricky, and parsing Dublin Core’s free-text fields has still been problematic.
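
To give a flavour of what ‘scraping’ means in practice, here’s a rough Python sketch of pulling a record out of a poorly marked-up partner page. The URL and CSS classes are hypothetical, and our production importers are rather more defensive than this:

    import requests
    from bs4 import BeautifulSoup  # copes reasonably well with dodgy character encodings

    # Hypothetical partner page; every real site is structured differently.
    resp = requests.get("http://partner.example.gov.au/collection/item/1234")
    soup = BeautifulSoup(resp.content, "html.parser")  # pass raw bytes, let the parser sniff the encoding

    record = {
        "title": soup.find("td", class_="itemTitle").get_text(strip=True),
        "description": soup.find("td", class_="itemDesc").get_text(strip=True),
    }
    print(record)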

In our standards-free world, there is also the problem of talking back.

Often we’re faced with the dilemma that we believe that we have in some way value-added to the data we have been given – but we have no way of easily communicating that back to its source.

Maybe we’ve found inconsistencies and errors in the data we have imported, or given “blobs” of data more structure, or our proofreaders have picked up some spelling mistakes. We can’t automatically encode our data back into the various crazy formats it comes in (well, we could, but that’s twice as much work!), and even if we did invest the time, there is no agreed way of communicating suggested changes. And what if the partner in question has lost funding and doesn’t have time to incorporate updates, no matter how we provide them?

This is a tricky problem without an easy solution.

What does it run on?

Behind the scenes the site is built pretty much entirely with open source components. It was built in Python using the Django framework and PostgreSQL’s geographic extension PostGIS (the combination known as GeoDjango).
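
For readers who haven’t met GeoDjango, a model with a spatial field and a distance query look roughly like the sketch below – the names are illustrative, not the actual About NSW schema:

    # models.py
    from django.contrib.gis.db import models

    class Place(models.Model):
        name = models.CharField(max_length=200)
        location = models.PointField(srid=4326)  # stored as a PostGIS geometry

    # elsewhere: "what's within 5 km of this point?"
    from django.contrib.gis.geos import Point
    from django.contrib.gis.measure import D

    here = Point(151.2070, -33.8715, srid=4326)  # longitude, latitude
    nearby = Place.objects.filter(location__distance_lte=(here, D(km=5)))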

For the interactive mapping it uses Modest Maps – which allows us to change between tile providers as needed, keeping everything pretty modular and re-purposable – along with a whole bunch of custom file-system-based tile-metadata service code.

Since we have data coming from lots of different providers with very different sets of fields, we store data internally in a general format which can handle arbitrary fields – the entity-attribute-value (EAV) pattern – although we get more mileage out of our version because of Django’s sophisticated support for data model subclassing.
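
A bare-bones illustration of the EAV idea in Django follows – the model and field names are hypothetical, and our real implementation layers model subclassing on top of something like this:

    from django.db import models

    class Record(models.Model):
        title = models.CharField(max_length=255)
        source = models.CharField(max_length=100)   # which partner the record came from

    class Attribute(models.Model):
        name = models.CharField(max_length=100, unique=True)   # e.g. "medium", "postcode"

    class AttributeValue(models.Model):
        record = models.ForeignKey(Record, related_name="values", on_delete=models.CASCADE)
        attribute = models.ForeignKey(Attribute, on_delete=models.CASCADE)
        value = models.TextField()

    # record.values.filter(attribute__name="postcode") then pulls an arbitrary field back out.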

We have also used Reuters’ OpenCalais to cross-map and relate articles initially, whilst a bunch of geocoders go to work making sense of some pretty rough geo-data.

We use both the state-government-supplied geocoder from the New South Wales government’s Spatial Information Exchange and Google’s geocoder to fill the gaps.
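
The fallback logic itself is simple; here’s a sketch with hypothetical wrapper functions standing in for the Spatial Information Exchange and Google clients:

    def sie_geocode(address):
        """Hypothetical wrapper around the NSW Spatial Information Exchange geocoder."""
        raise NotImplementedError

    def google_geocode(address):
        """Hypothetical wrapper around Google's geocoding service."""
        raise NotImplementedError

    def geocode(address):
        """Try the state geocoder first, fall back to Google, give up gracefully."""
        for backend in (sie_geocode, google_geocode):
            try:
                result = backend(address)
            except Exception:
                result = None   # a flaky upstream service shouldn't kill an import run
            if result:
                return result   # (latitude, longitude)
        return None             # better to leave a record unmapped than to guess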

And we use Google Analytics, plus the Google Analytics Data Export API, to deliver contributor-specific usage data.

We use an extensive list of open-source libraries to make all this work, many of which we have committed patches to along the way.

We do our data parsing with

  • phpserialize for Python, for rolling quick APIs with our PHP-using friends
  • PyPdf for reading PDFs
  • pyparsing for parsing specialised formats (e.g. broken “CSV”)
  • Beautiful Soup for page scraping
  • lxml for XML document handling
  • suds for SOAP APIs (and it is absolutely the best, easiest and most reliable Python SOAP client out there)

Our search index is based on Whoosh, with extensive bug fixes by our friendly neighbourhood search guru Andy.
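
If you haven’t come across Whoosh, indexing and querying look roughly like this toy example (not our actual schema):

    import os
    from whoosh.fields import Schema, ID, TEXT
    from whoosh.index import create_in
    from whoosh.qparser import QueryParser

    schema = Schema(path=ID(stored=True, unique=True), title=TEXT(stored=True), content=TEXT)
    os.makedirs("indexdir", exist_ok=True)
    ix = create_in("indexdir", schema)

    writer = ix.writer()
    writer.add_document(path="/articles/hill-end", title="Hill End", content="Gold-mining town ...")
    writer.commit()

    with ix.searcher() as searcher:
        query = QueryParser("content", ix.schema).parse("gold mining")
        for hit in searcher.search(query):
            print(hit["title"])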

We’ve also created some of our own which have been mentioned here before:

  • python-html-sanitizer takes our partners’ horrifically broken or embedded-style-riddled HTML and makes it something respectable (based on the excellent html5lib as well as Beautiful Soup)
  • django-ticket is a lightweight DB-backed ticket queue optimised for efficient handling of resource-intensive tasks, like semantic analysis.

—-

So, go and have a play.

We know there are still a few things that don’t quite work, but we figure that your eyes might see things differently to ours. We’re implementing a bunch of UI fixes in the next fortnight too, so you might want to check back then and see what has improved. Things move fast on the web.

Categories
Tools

Moving out in to the cloud – reverse proxying our website

For a fair while we’ve been thinking about how we can improve our web hosting. At the Powerhouse we host everything in-house and our IT team does a great job of keeping things up and running. However, as traffic to our websites has grown exponentially, along with an explosion in the volume of data we make available, scalability has become a huge issue.

So when I came back from Museums and the Web in April, I dropped Rob Stein, Charles Moad and Edward Bachta’s paper on how the Indianapolis Museum of Art was using Amazon Web Services (AWS) to run ArtBabble on the desk of Dan, our IT manager.

A few months ago a new staff member started in IT – Chris Bell. Chris had a background in running commercial web hosting services, and his knowledge and skills in the area have been invaluable. In a few short months our hosting setup has been overhauled. With a move to virtualisation inside the Museum as a whole, Chris started working with one of our developers, Luke, on how we might try AWS ourselves.

Today we started our trial of AWS, beginning with the content in the Hedda Morrison microsite. Now when you visit that site all the image content, including the zoomable images, is served from AWS.

We’re keeping an eye on how that goes and then will switch over the entirety of our OPAC.

I asked Chris to explain how it works and what is going on – the solution he has implemented is elegantly simple.

Q: How have you changed our web-hosting arrangements so that we make use of Amazon Web Services?

We haven’t changed anything actually. The priorities in this case were to reduce load on our existing infrastructure and improve performance without re-inventing our current model. That’s why we decided on a system that would achieve our goals of outsourcing the hosting of a massive number of files (several million) without ever actually having to upload them to a third-party service. We went with Amazon Web Services (AWS) because it offers an exciting opportunity to deliver content from a growing number of geographical points that will suit our users. [Our web traffic over the last three months has been split 47% Oceania, 24% North America, 21% Europe]

Our current web servers deliver a massive volume and diversity of content. By identifying areas where we could out-source this content delivery to external servers we both reduce demand on our equipment – increasing performance – and reduce load on our connection.

The Museum does not currently have a connection intended for high-end hosting applications (despite the demand we receive), so moving content out of the network promises to not only deliver better performance for our website users but also for other applications within our corporate network.

Q: Reverse-proxy? Can you explain that for the non-technical? What problem does it solve?

We went with Squid, which is a cache server. Squid is basically a proxy server, usually used to cache inbound Internet traffic and spy on your employees or customers – but also to optimise traffic flow. For instance, if one user within your network accesses a page from a popular web site, it’s retained for the next user so that it needn’t be downloaded again. That’s called caching – it saves traffic and improves performance.

Squid is a proven, open-source and robust platform, which in this case allows us to do this in reverse – a reverse proxy. When users access specified content on our web site, if a copy already exists in the cache it is downloaded from Amazon Web Services instead of from our own network, which has limited bandwidth that is more appropriately allocated to internal applications such as security monitoring, WAN applications and – naturally – in-house YouTube users (you know who you are!).
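
For the technically inclined, a Squid reverse-proxy (‘accelerator’) configuration looks roughly like the sketch below – the hostnames are placeholders rather than our actual setup:

    # squid.conf (sketch): listen as an accelerator and fetch cache misses from the origin server
    http_port 80 accel defaultsite=www.example.org vhost
    cache_peer origin.example.org parent 80 0 no-query originserver name=origin
    acl our_site dstdomain www.example.org
    cache_peer_access origin allow our_site
    http_access allow our_site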

Q: What parts of AWS are you using?

At this stage we’re using a combination. S3 (Simple Storage Service) is where we store our virtual machine images – that’s the back-end stuff, where we build virtual machines and create AMIs (Amazon Machine Images) to fire up the virtual server that does the hard work. We’re using EC2 (Elastic Compute Cloud) to load these virtual machines into running processes that implement the solution.

Within EC2 we also use Elastic IPs to forward services to our virtual machines – in the first instance web servers and our proxy server – which also allows us to enforce security protocols and implement management tools, such as SNMP monitoring, for assessing the performance of our cache server. We also use EBS (Elastic Block Store) to create virtual hard drives which maintain the cache, can be backed up to S3 and can be re-attached to a running instance should we ever need to re-configure the virtual machine. All critical data, including logs, is maintained on EBS.

We’re also about to implement a solution for another project called About NSW where we will be outsourcing high bandwidth static content (roughly 17GB of digitised archives in PDFs) to Amazon CloudFront.

Q: If an image is updated on the Powerhouse site how does AWS know to also update?

It happens transparently, and that’s the beauty of the design of the solution.

We have several million files that we’re trying to distribute, and they are virtually unmanageable in a normal Windows environment – trying to push all of that content to the cloud would be a nightmare. By using the reverse-proxy method we effectively pick and choose: the most popular content is pulled across and automatically copied to the cloud for re-use.

Amazon have recently announced an import/export service, which would effectively allow us to send them a physical hard drive of content to upload to a storage unit that they call a “bucket”. However, this is still not a viable solution for us because it’s not available in Australia and our content keeps growing – every day. By using a reverse proxy we effectively ensure that the first time a piece of content is accessed it becomes rapidly available to any future users. And we can still do our work locally.

Q: How scalable is this solution? Can we apply it to the whole site?

I think it would be undesirable to apply it to dynamic content, so no – things such as blogs, which change frequently, or search results, which will always be slightly different depending on changes to the underlying databases at the back end. In any case, once the entire site is fed via a virtual machine in another country you’ll actually experience a reduction in performance.

The solution we’ve implemented is aimed at re-distributing traffic in order to improve performance. It is an experiment, and the measurement techniques that we’ve implemented will gauge its effectiveness over the next few months. We’re trying to improve performance and save money, and we can only measure that through statistics, lies and invoices.

We’ll report back shortly once we know how it goes, but go on – take a look at the site we’ve got running in the cloud. Can you notice the difference?

Categories
Interviews

Dan Collins on our move to virtualisation

Our IT manager, Dan Collins, is in The Australian today talking about our move over the last year to virtualisation of our servers.

“We have got a much-reduced infrastructure spend in terms of the replacement cycle of hardware,” Mr Collins said. “When you look at what it saved us having to replace over three years, I would say that is about $200,000 worth of equipment.”

There were additional savings on labour costs for maintaining the equipment, along with reduced service calls, he said.

“We have gone down now to three host servers, a massive change from 35, and that has obviously had effects on power and cooling in our server room. It is much quieter than before.”

The museum has cut its technology power costs by 33 per cent . . .

Categories
Web metrics

How much is your website worth?

I’ve noticed that I’ve been tweeting a lot of links rather than blogging them as I used to. And from time to time there are some links that need to be blogged to get to those who miss the tweets or don’t follow.

Here’s one from the Web & Information Team at Lincolnshire County Council in the UK titled ‘Let’s Turn Off The Web‘.

In order to try to calculate how much the local council website is worth, they turn the question around and ask how much it would cost to provide the same services and level of interaction with citizens if they didn’t have a website.

I like this way of thinking as it provides a way of demonstrating the value of your online services to those who see them only as a ‘cost’. (Your organisation probably already thinks in this way when it is trying to calculate the value of marketing and PR, but web units rarely do.)

So discussions of cost per user, as in a recent Freedom of Information request to many councils, miss the point. It’s not just about cost per user. It’s about value to the user and savings to the council.

If we turned off our web services:

177,000 visitors per month (May 2009 figures) to our web site would find no web site.

If only 10% of these visitors were to contact us by phone – say 17,000 – then we would incur an extra cost of approx £51,000 per month.* Based on Socitm’s costs of phone contact

Obviously a whole lot of things couldn’t be done at all, but I was particularly drawn to these figures quoted by Lincolnshire Council from work by SOCITM called Channel Value Benchmarking:

*The costs of customer contact are…

Face to face: £6.56
Phone: £3.22
Web: 27p
(These figures provided by Socitm, 2008.)

Suddenly your web unit is looking pretty good value for money.

Categories
Conceptual open content

Some clarifications on our experience with ‘free’ content

Over on the Gov2 blog a comment was posted that asked for more information about our experience at the Powerhouse with ‘giving away content’ for free.

I’d be interested to know more about your experience with Flickr and your resulting sales increase. Are these print sales or licensing sales? And are they sales, through your in-house service, of the identical images you have on Flickr, or are you using a set of images on Flickr as a ‘teaser’ to a premium set of images you hold in reserve? How open is this open access? I am trying to understand the mindset of users in an open access environment who will migrate from ‘free use’ to ‘pay-for use’ for identical content, as this makes no sense, either commercially or psychologically, unless there is additional service or other value-add.

Whilst I communicated a lengthy response privately, I think some of it is valuable to post here to clarify and build upon the initial findings published by my colleague Paula Bray earlier this year.

Here’s what I wrote. Some of this will be familiar to regular readers, some of it is new.

(Please also bear in mind that I am focussing here predominantly on economic/cost-related issues. Regular readers will know that our involvement in the Commons on Flickr has been largely driven by community and mission-related reasons – don’t take this post as a rebuff of those primary aims.)

First, a couple of things that are crucial for understanding the nuances of our situation (and how it differs, say, from that of other institutions, galleries and museums):

  1. The Powerhouse is, more or less, a science museum in its ‘style’ (although not in its collection). Our exhibitions have traditionally, since our re-launch/re-naming in 1988, been heavy on ‘interactivity’ (in an 80s kind of way) and ‘hands on’. We aren’t known for our photographic or image collections and we haven’t done pure photographic exhibitions (at least not in the last 15 years).
  2. Consequently we have a small income target for image sales. This target doesn’t even attempt to cover the salaries of the two staff in our Photo Library.
  3. In 2007/8 around 72% of our income was from State government funding.
  4. The Powerhouse has an entry charge for adults, and children aged 4 and over. In 2007/8 this made up 65% of the remainder of our income. Museum membership (which entitles free entry) added a further 8%.

(You can find these figures in our annual reports)

So what have we found by releasing images into the Commons on Flickr?

Firstly we’ve been able to connect with the community that inhabits Flickr to help us better document and locate the images that we have put there. This has revealed to us a huge amount about the images in our collection – especially as these images weren’t particularly well documented in the first place. This has incurred a resource cost to us of course in terms of sifting responses and then fact checking by curatorial staff. But this resource cost is outweighed by the value of the information we are getting back from the community.

Secondly, we’ve been able to reach much wider audiences and better deliver on our mission. In their first four weeks on Flickr, these images eclipsed an entire year’s worth of views of the same images on our own website. Our images were already readily Google-able and were also available through Picture Australia, the National Library of Australia’s federated image and picture search.

(I’ve written about this quite a bit on the blog previously.)

Thirdly, we’ve found that as very few people knew we had these images in the first place, we’ve been able to grow the size of the market for them whilst simultaneously reducing the costs of supplying images.

How has this ‘reduced the costs’?

What Flickr has done is reduce the internal cost of delivering these images to “low economic/high mission value” clients such as teachers, school kids and private citizens. Rather than come through us to request ‘permission’, these clients can now directly download a 1024px version for use in their school projects or private work. The reduction in staff time and resources as a result of this is not to be underestimated, nor is the increased ease of use for clients.

At the same time, Flickr’s reach has opened up new “high economic/low mission value” client groups. Here I am talking about commercial publishers, broadcasters and businesses. These clients want a specific resolution, crop or format, and we can now charge for supply in those formats. We are also finding that we are now getting orders and requests from businesses that had never considered us as a source of such material. We are actively expanding our capacity to deliver art prints to meet the growing needs of businesses as a result.

It is about relationships and mission!

At the same time, we can now build other relationships with those clients – rather than seeing them only in the context of image sales. This might be through physical visitation, corporate venue hire, membership, or donations.

Likewise, we know that the exposure of our public domain images is leading to significant offers of other photographic collections to the Museum alongside other commercial opportunities around digitisation and preservation services. Notably we have also been trying to collapse and flatten the organisation so that business units and silos aren’t in negative competition internally – so we can actually see a 360 degree view of a visitor/patron/consumer/citizen.

Categories
Conferences and event reports

Upcoming talks, workshops and presentations

I’ve got a bunch of sector talks, workshops and presentations coming up over the next few months. I’ll be talking about some brand new (and right now, top secret) projects that focus on ‘linked data’, maps and the ‘Papernet’, as well as delving deeper into metrics, ‘value’ and digital strategy.

So you just missed me at GLAM-WIKI at the Australian War Memorial in Canberra, but I’ll be giving a day-long seminar titled Social Collections, New Metrics, Maps and Other Australian Oddities at the San Francisco Museum of Modern Art on August 27. I’m really excited to be catching up with everyone on the West Coast and exchanging new ideas and strategies. This is a free seminar presented by the Wallace Foundation, The San Francisco Foundation, Grants for the Arts/The San Francisco Hotel Tax Fund, Theatre Bay Area and Dancers’ Group. It is also part of the National Arts Marketing Project (NAMP), a program of Americans for the Arts that is sponsored nationally by American Express.

Then I’ll be giving a presentation focussing on ‘The Social Collection’ at Raise Your Voice: the Fourth National Public Galleries Summit, September 9-11 in Townsville. I’m looking forward to hearing Virginia Tandy from Manchester City Council, and the NGV’s Lisa Sassella is running a masterclass on audience segmentation and psychographics which looks fascinating. And of course artist Craig Walsh is speaking as well and, well, we’ve been working on a little something.

There are even rumours that there might be a reprise of my UK workshops of last September, run jointly by Culture24 and Collections Trust, sometime in early November – but right now, UK readers, that is still just a rumour.

After that I’m at the New Zealand National Digital Forum in Wellington, NZ on November 23-24 where I’m presenting and facilitating sessions around locative cultural projects. I’m excited about NDF because it is always full of inspirational Kiwi initiatives and a couple of well chosen international speakers – this year the inimitable Nina Simon, and Daniel Incandela from the Indianapolis Museum of Art. No doubt there’ll be some exciting new initiative from Digital NZ announced at NDF – just because they can.

Categories
Young people & museums

Odditoreum visitor-written-labels now on Flickr

Thanks to encouragement from Shelley Bernstein at the Brooklyn Museum, Paula Bray has started uploading photos of some of the ‘visitor-generated labels‘ from our Odditoreum mini-exhibition.

The ‘write-your-own-labels’ continue to be a roaring success.

More on the Odditoreum here and on the basic info page.

Categories
Conferences and event reports open content Wikis

Some thoughts: post #GLAM-WIKI 2009


Photography by Paula Bray
License: Creative Commons Attribution-Noncommercial-No Derivative Works 2.0

(Post by Paula Bray)

Seb and I have just spent two days at a conference in the nation’s rather chilly capital that involved a bunch of Wikimedians (wonder what the collective noun for those would be) and members from the GLAM (galleries, libraries, archives and museums) sector. The event was touted as a two-way dialogue to see how the two sectors could work more closely together for “the achievement of better online public access to cultural heritage”.

So what do we do post conference?

GLAM-WIKI was a really interesting conference to be a part of, even if some of us were questioning why we were there. Some of the tweets on Twitter said there is a need for concise decisions instead of summaries. I am not sure at this stage that there are complete answers, and the concise decisions will need to be made by us, the GLAMs.

Jennifer Riggs, Chief Program Officer at the Wikimedia Foundation, summed it up quite well when she asked “what is one thing you will do when you leave this conference?” I think this is exactly the type of action that can lead to bigger change. Perhaps it is a presentation to other staff members in your organisation, a review of your licensing policies and business models, a suggestion of better access to your content in your KPIs, or the start of a page on Wikimedia about what you do and have in your collections.

One of the disturbing things for me came from Delia Browne, National Copyright Director at the Ministerial Council for Employment, Education, Training and Youth Affairs. Browne highlighted the rising costs the education sector is paying to copy assets, including content from our own institutions. She stated that there has been a 720% increase in statutory licensing costs, and the more content that goes online the more this cost will increase. The GLAM sector can help here by rethinking its licensing options and looking towards a Creative Commons licence for content it may own the rights to, including things like teachers’ notes. Teachers can do so much with our content but they need to know what they can use. She raised the question, “What sort of relationships do we want with the education sector?” The education sector will be producing more and more content for itself and this will enter into our sector. We don’t want to be competing but rather complementing each other. Schools make up 60% of CAL’s (Copyright Agency Limited) revenue. What will this figure be when the Connected Classrooms initiative is well and truly operational in the “digital deluge”, a term mentioned by Senator Kate Lundy?

Lundy gave the keynote presentation titled Finding Common Ground. She brought up many important issues in her presentation including the rather awkward one around access to material that is already in the public domain. Lundy:

“These assets are already in the public domain, so concepts of ‘protection’ that inhibit or limit access are inappropriate. In fact, the motivation of Australia’s treasure house institutions is, or should be, to allow their collections to be shared and experienced by as many people as possible.”

Sharing, in turn, leads to education, research and innovation. This is something that we have experienced with our images in the Commons on Flickr and we only have 1200 images in our photostream.

The highlight for me was the question she says we should be addressing: “Why are we digitising in the first place?”

This is a really important question and should be asked at the beginning of every digitisation project. The public needs fast access to content that it trusts, and our models are not going to be able to cope with the need for fast dissemination of our digital content in the future if we don’t make it accessible. It costs so much to digitise our collections – so surely we need to ask this question first and foremost. Preservation is not enough anymore. There are too many hoops to go through to get content and we are not fast enough. “The digital doors must be opened”, and this is clearly demonstrated by the great Australian Newspapers Digitisation Program, presented by Rose Holley of the National Library of Australia.

However, as Lundy said during the panel discussion following her presentation, “goodwill will have to bust out all over”. There is a lot of middle ground that the GLAM sector needs to address in relation to policy around its access initiatives and digital strategies and, yes, I think policy does matter. If we can get this right then the doors can be opened and the staff in organisations can work towards the KPIs, missions and aims of unlocking our content and making it publicly available.

Perhaps your one thing, post GLAM-WIKI conference, could be to comment on the Government 2.0 Taskforce Issues Paper and ensure that all the talk of Government 2.0 clearly includes reference to the Government-funded GLAM sector.

Categories
Social media Web metrics

Virtuous circle – from visitor to speaker

This short post is for everyone who naively asks about the “ROI of social media” and whether “websites can be proven to result in museum visitation”.

Two years ago Bob Meade wasn’t a regular visitor to the Museum (despite being directly in one of our “target demographics”) let alone a user of our website.

Then we released a bunch of photographs to the Commons on Flickr. These piqued Bob’s interest and reminded him that the Museum existed in his very own home town. (You can read more about that in an interview with Bob from last year – part one, part two.)

Now he’s speaking at one of our weekend talks!

Bob is blogging the prospective content (and museum favourites) of his talk over at his own blog.

It is important to understand that this wasn’t the result of a (social media) “marketing strategy” – it was the result of making valuable museum content broadly available and then engaging our communities in honest, personal conversations.

If you are in Sydney, then come along and hear him speak on September 6.