Categories
Developer tools, User experience

Playing with Google’s reading age tool

Google just released a new ‘reading level’ filter in the Advanced Search section of its search – the part that probably only librarians regularly use.

I’ve run it on a few of our domains with interesting results.

Here’s our main site, powerhousemuseum.com.

After seeing that I went off and ran it over a slew of other museums to see if I could spot any patterns. It seems that natural history museums have the highest proportion of ‘advanced’ results, whilst art museums skew towards ‘intermediate’.

I also tried one of the new ‘events calendar’ sites we’ve been involved in building (behind-the-scenes post coming soon). Being a calendar site aimed at parents looking for holiday activities, we want to make sure it has the broadest possible appeal. Fortunately we seem to do rather well – 100% Basic!

I’m not sure how valuable this really is in the long run but it is another tool to experiment with. There’s been some fun analysis of different news (and other) sites using the tool over at Virtual Economics.
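Google hasn’t published how its filter scores pages, so purely as an illustration of the general idea, here’s a classic readability formula – Flesch-Kincaid grade level – which gives a feel for how ‘basic’ versus ‘advanced’ text might be separated. This is a stand-in sketch, not Google’s method:

```python
import re

def flesch_kincaid_grade(text):
    """Rough Flesch-Kincaid grade level, with a crude syllable heuristic.

    Google hasn't published its reading-level model, so this classic
    formula is only an illustrative stand-in, not what the filter uses.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        # Count vowel groups as a cheap proxy for syllables.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (total_syllables / n_words) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat."))  # very low grade: 'basic'
print(flesch_kincaid_grade("Taxonomic classification of arthropods "
                           "necessitates meticulous morphological analysis."))  # much higher
```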

Categories
User behaviour, Web metrics

A/B headline switching for museum content

Regular readers will know that I’ve been fascinated by the overlap between museum curatorial practice and journalism for a while now. Similarly, I’ve been very interested in the impact on these professions of the behavioural data that is emerging at scale, and in real time, on digital platforms.

So I was very excited to find that a ‘headline tester’ plugin for WordPress has been released out of a Baltimore Hackcamp.

You will have noticed how the headlines on news websites change throughout the day for the same article. This has been the subject of several online projects, like The Quick Brown, which tracked changes in Fox News headlines, and News Sniffer, which tracks full article edits in the UK.

This sort of A/B testing usually takes a lot of work and planning, and is hard to deploy on a daily basis with the kind of resources museums have available. In news journalism time is of the essence – readership fluctuations directly impact the commercial model in a highly competitive environment – so it makes a lot of sense to have systems in place for journalists to track and edit their stories as they go. Museums don’t face these pressures, but they do face the same competition for attention.

What this plugin allows us to do is – like a news website – pose two different headlines for the same blog post; then, over time, the one that generates the most clicks becomes the one that sticks for that post. Visitors and readers effectively vote, through their actions, for the ‘best’ title.
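I haven’t dug into the plugin’s internals, so here is only a minimal sketch of the general mechanism just described – the class, names and threshold are hypothetical, not the plugin’s actual code:

```python
import random

# A minimal sketch of headline A/B testing as described above; names and
# thresholds are hypothetical, not the actual WordPress plugin's internals.
class HeadlineTest:
    def __init__(self, headline_a, headline_b, min_trials=100):
        self.stats = {headline_a: {"shown": 0, "clicks": 0},
                      headline_b: {"shown": 0, "clicks": 0}}
        self.min_trials = min_trials  # keep testing until each variant has enough data

    def pick(self):
        """Choose which headline to serve for this pageview."""
        undertested = [h for h, s in self.stats.items() if s["shown"] < self.min_trials]
        if undertested:
            choice = random.choice(undertested)  # still testing: alternate randomly
        else:
            choice = self.winner()               # enough data: serve the winner
        self.stats[choice]["shown"] += 1
        return choice

    def record_click(self, headline):
        self.stats[headline]["clicks"] += 1

    def winner(self):
        """Headline with the best click-through rate so far."""
        return max(self.stats,
                   key=lambda h: self.stats[h]["clicks"] / max(1, self.stats[h]["shown"]))

test = HeadlineTest("Are you interested in hearing about our camera collection?",
                    "The Bessa 66 folding camera")
shown = test.pick()  # render this headline; call test.record_click(shown) if clicked
```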

We’ve just started to deploy this on the Photo of the Day blog and it will progressively roll out over the others as we go.

Today’s Photo of the Day post introduces a camera from our collection. So which out of these two headlines do you think would generate the most traffic?

Are you interested in hearing about our camera collection?
or
The Bessa 66 folding camera

Paula Bray, who wrote the post, expected the first headline to be the most popular. And now we can test that hypothesis!

Surprisingly, right now it is the second, more direct headline – ‘The Bessa 66 folding camera’ – that is generating the most traffic, by almost 2 to 1.

Over time we will be able to better refine the headlines written by curators and other staff who blog. And of course this feeds back into improving the effectiveness of the museum’s writing style in these digital media.

Categories
API, Interviews

Quick interview with Amped Powerhouse API winners – Andrea Lau & Jack Zhao

Andrea Lau & Jack Zhao were the winners of the Powerhouse Museum challenge at the recent Amped hack day organised by Web Directions in Sydney.

As part of their prize they won a basement tour to see all the things that the Powerhouse doesn’t have out on display. Renae Mason, senior online producer at the Museum, bailed them up for a quick Q&A in the noisy confines of the basement.

Apologies for the noisy audio! Museum storage facilities can be surprisingly loud places!

Categories
User behaviour, Web metrics

Testing an engagement metric and finding surprising results

As regular readers know, I’ve been working on web metrics for a few years now, experimenting with different models for cultural institutions. So it was with interest that I read about Philly.com’s equation for online engagement over at the Nieman Journalism Lab.

… two months ago, philly.com, home of the Philadelphia Inquirer and Daily News, began analyzing their web traffic with an “engagement index” — an equation that goes beyond pageviews and into the factors that differentiate a loyal, dedicated reader from a fly-by. It sums up seven different ways that users can show “engagement” with the site, and it looks like this: Σ(Ci + Di + Ri + Li + Bi + Ii + Pi)

[…snip…]

One possibility they considered was measuring engagement simply through how many visitors left comments or shared philly.com content on a social media platform. But that method “would lose a lot of people,” Meares said. “A lot of our users don’t comment or share stories, but we have people — 45 percent — [who] come back more than once a day, and those people are very engaged.”

They ultimately decided on seven categories, each with a particular cutoff:

Ci — Click Index: visits must have at least 6 pageviews, not counting photo galleries
Di — Duration Index: visits must have spent a minimum of 5 minutes on the site
Ri — Recency Index: visits that return daily
Li — Loyalty Index: visits that either are registered at the site or visit it at least three times a week
Bi — Brand Index: visits that come directly to the site by either bookmark or directly typing www.philly.com or come through search engines with keywords like “philly.com” or “inquirer”
Ii — Interaction Index: visits that interact with the site via commenting, forums, etc.
Pi — Participation Index: visits that participate on the site via sharing, uploading pics, stories, videos, etc.

Philly’s equation draws heavily on Eric T. Peterson and Joseph Carrabis’ “Measuring the Unmeasurable: Visitor Engagement” (pdf).

I started thinking about how to apply this equation to the Powerhouse’s web metrics.

Click (6 pages or more) and Duration (5 minutes or more) indexes are fine. However, Recency set at daily visitation is simply not achievable for museums – especially when through-the-door museum visitors are likely to average around one visit a year – and our online content is never going to be as responsive as ‘news’ has to be. So for Recency I settled on a 90-day figure.
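To make the adaptation concrete, here’s a minimal sketch of how a visit might be classified under this variant. It is one plausible reading only, with hypothetical visit fields (these aren’t actual analytics export columns), treating the three core indexes as the qualifying test:

```python
# A sketch of a Philly-style engagement test with the museum-friendly tweaks
# described above. Visit fields and the all-three-indexes rule are assumptions.
def is_high_value(visit,
                  min_pages=6,            # Ci: Click Index
                  min_duration_secs=300,  # Di: Duration Index (5 minutes)
                  recency_days=90):       # Ri: relaxed from Philly's daily to 90 days
    ci = visit["pageviews"] >= min_pages
    di = visit["duration_secs"] >= min_duration_secs
    ri = visit["days_since_last_visit"] <= recency_days
    # Ii (interaction) and Pi (participation) are downplayed for our site; Li
    # (loyalty) and Bi (brand) would slot in the same way where the data allows.
    return ci and di and ri

visits = [
    {"pageviews": 17, "duration_secs": 1184, "days_since_last_visit": 30},
    {"pageviews": 2,  "duration_secs": 45,   "days_since_last_visit": 400},
]
share = sum(is_high_value(v) for v in visits) / len(visits)
print(f"High-value share: {share:.2%}")
```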

Here’s an eight-quarter look at how we’ve been tracking against a variant of this metric – downplaying the Interaction and Participation indexes, as our content type and site don’t work evenly for these.

I’ve added a column for Sydney-only visitors so you can get a sense of how geographically specific this engagement metric is for a museum such as ours.

Quarter     Philly-style High Value %     Philly-style High Value % (Sydney-only)
Q3 2010     3.73%                         8.10%
Q2 2010     3.20%                         7.78%
Q1 2010     2.38%                         7.69%
Q4 2009     1.60%                         5.56%
Q3 2009     1.73%                         5.14%
Q2 2009     1.75%                         5.67%
Q1 2009     2.12%                         7.24%
Q4 2008     1.45%                         4.59%


Taking a closer look at Q3 2010 and the Sydney Philly-style high-value segment, there is some interesting data.

This apparently highly-engaged segment comprises 8.10% of all Sydney traffic to the Powerhouse website for the period. 71.25% of this segment are new visitors to the Powerhouse, looking at a remarkable average of 17.3 pages per visit and spending an average of 19:44 minutes on the site, measured up to the final page of their visit. These are clearly a highly desirable group of web visitors.

So what do they do?

Interestingly it turns out that these are primarily what we used to call ‘traditional education visitors’. I’ve written about them before in my paper for Museums & the Web earlier in the year.

31.47% visit Australian Designers at Work, a resource built and last modified in 2004
15.45% visit Australia Innovates, a curriculum resource built in 2001
7.58% visit exhibition promotional pages
7.54% visit the online collection

Perhaps unsurprisingly for such committed, but traditional, web visitors, they also accounted for 50% of the online membership purchases during the period.

Categories
Exhibition technology, Interactive Media, User behaviour, Young people & museums

The honeypot effect: more on WaterWorx, the Powerhouse Museum’s iPad interactive

Photography by Geoff Friend, Powerhouse Museum. CC-BY-NC-ND

It’s week one of our iPad interactive – WaterWorx – and the feedback from visitors and teachers alike has been great.

Just to prove how much of a honeypot the iPads are, here’s a time-lapse from the day that the exhibition was soft launched. You can see the early morning final touches being added to the space, followed by the flurry of the first school visitors, and so on.

You can see for yourself the significant dwell times and people coming back for another go. And that’s awesome.

We’ve been deploying minor fixes as we go, and the OtterBox Defender cases that we have adapted to protect the iPads are being pushed to their limits!

(If you missed our first post, which describes the game itself, you need to travel back in time a few days.)

Categories
API, Collection databases, Conceptual, Interviews, Metadata

Making use of the Powerhouse Museum API – interview with Jeremy Ottevanger

As part of a series of ‘things people do with APIs’, here is an interview I conducted with Jeremy Ottevanger of the Imperial War Museum in London. Jeremy was one of the first people to sign up for a Powerhouse Museum API key – even though he was on the other side of the world.

He plugged the Powerhouse collection into a project he’s been doing in his spare time called Mashificator, which combines several other cultural heritage APIs.

Over to Jeremy.

Q – What is Mashificator?

It’s an experiment that got out of hand. More specifically, it’s a script that takes a bit of content and pulls back “cultural” goodies from museums and the like. It does this by using a content analysis service to categorise the original text or pull out some key words, and then using some of these as search terms to query one of a number of cultural heritage APIs. The idea is to offer something interesting and in some way contextually relevant – although whether it’s really relevant or very tangential varies a lot! I rather like the serendipitous nature of some of the stuff you get back but it depends very much on the content that’s analysed and the quirks of each cultural heritage API.

There are various outputs but my first ideas were around a bookmarklet, which I thought would be fun, and I still really like that way of using it. You could also embed it in a blog, where it will show you some content that is somehow related to the post. There’s a WordPress plugin from OpenCalais that seems to do something like this: it tags and categorises your post and pulls in images from Flickr, apparently. I should give it a go! Zemanta and Adaptive Blue also do widgets, browser extensions and so on that offer contextually relevant suggestions (which tend to be e-commerce related) but I’d never seen anything doing it with museum collections. It seemed an obvious mashup, and it evolved as I realised that it’s a good way to test-bed lots of different APIs.

What I like about the bookmarklet is that you can take it wherever you go, so whatever site you’re looking at that has content that intrigues you, you can select a bit of a page, click the bookmarklet and see what the Mashificator churns out.

Mashificator uses a couple of analysis/enrichment APIs at the moment (Zemanta and Yahoo! Terms Extractor) and several CH APIs (including the Powerhouse Museum, of course!). One could go on and on, but I’m not sure it’s worthwhile: at some point, if this is helpful to anyone, it will be done a whole lot better. It’s tempting to try to put a contextually relevant Wolfram Alpha into an overlay, but that’s not really my job, so although it would be quite trivial to do geographical entity extraction and show a map of the results, for example, it’s going too far beyond what I meant to do in the first place, so I might draw the line there. On the other hand, if the telly sucks on Saturday night, as it usually does, I may just do it anyway.

Besides the bookmarklet, my favourite aspect is that I can rapidly see the characteristics of the enrichment and content web services.
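In outline, the pipeline Jeremy describes is just two web-service calls chained together. Here’s a minimal sketch – the endpoints, parameters and response shapes are placeholders, not the real Zemanta, Yahoo! Terms Extractor or Powerhouse API contracts:

```python
import requests

# An outline of the Mashificator pipeline as Jeremy describes it; the URLs,
# parameters and response shapes are placeholders, not real API contracts.
def mashificate(selected_text):
    # 1. Send the selected text to an entity/keyword extraction service.
    terms = requests.post("https://example-extractor.invalid/extract",
                          data={"text": selected_text}).json()["keywords"]
    # 2. Use the top keywords as a search against a cultural heritage API.
    results = requests.get("https://example-museum-api.invalid/search",
                           params={"q": " ".join(terms[:2]), "key": "YOUR_KEY"}).json()
    # 3. Hand the objects back to the bookmarklet/overlay for display.
    return results.get("objects", [])
```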

Q – Why did you build it?

I built it because I’m involved with the Europeana project, and for the past few years I’ve been banging the drum for an API there. When they had an alpha API ready for testing this summer they asked people like me to come up with some pilots to show off at the Open Culture conference in October. I was a bit late with mine, but since I’d built up some momentum with it I thought I may as well see if people liked the idea. So here you go…

There’s another reason, actually, which is that since May (when I started at the Imperial War Museum) it’s been all planning and no programming, so I was up for keeping my hand in a bit. Plus I’ve done very little PHP and jQuery in the past, so this project has given me a focussed intro to both. We’ll shortly be starting serious build work on our new Drupal-based websites so I need all the practice I can get! I’m still no PHP guru, but at least I know how to make an array now…

Q – Most big institutions have had data feeds – OAI etc – for a long time now, so why do you think APIs are needed?

Aggregation (OAI-PMH’s raison d’être) is great, and in many ways I prefer to see things in one place – Europeana is an example. For me as a user it means one search rather than many; similarly for me as a developer. Individual institutions offering separate OPACs and APIs doesn’t solve that problem, it just makes life complicated for human or machine users (ungrateful, aren’t I?).

But aggregation has its disadvantages too: data is resolved to the lowest common denominator (though this is not inevitable in theory); there’s the political challenge of getting institutions to give up some control over “their” IP; the loss of context as links to other content and data assets are reduced. I guess OAI doesn’t just mean aggregation: it’s a way for developers to get hold of datasets directly too. But for hobbyists and for quick development, having the entirety of a dataset (or having to set up an OAI harvester) is not nearly as useful or viable as having a simple REST service to programme against, which handles all the logic and the heavy lifting. And conversely for those cases where the data is aggregated, that doesn’t necessarily mean there’ll be an API to the aggregation itself.

For institutions, having your own API enables you to offer more to the developer community than if you just hand over your collections data to an aggregator. You can include the sort of data an aggregator couldn’t handle. You can offer the methods that you want as well as the regular “search” and “record” interfaces, maybe “show related exhibitions” or “relate two items” (I really, really want to see someone do this!) You can enrich it with the context you see fit – take Dan Pett’s web service for the Portable Antiquities Scheme in the UK, where all the enrichment he’s done with various third party services feeds back into the API. Whether it’s worthwhile doing these things just for the sake of third party developers is an open question, but really an API is just good architecture anyway, and if you build what serves your needs it shouldn’t cost that much to offer it to other developers too – financially, at least. Politically, it may be a different story.
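The harvest-versus-query contrast Jeremy draws is easy to see in code. A minimal sketch, with hypothetical base URLs standing in for any given museum:

```python
import requests

# The contrast in miniature; both base URLs below are hypothetical.

# OAI-PMH: page through the *entire* dataset (ListRecords is a standard verb),
# then parse, store and index the records yourself before you can search them.
oai = requests.get("https://example-museum.invalid/oai",
                   params={"verb": "ListRecords", "metadataPrefix": "oai_dc"})
# ...parse the XML in oai.text, then follow <resumptionToken> until exhausted...

# REST API: one request answers one question; the server does the heavy lifting.
hits = requests.get("https://example-museum.invalid/api/v1/objects",
                    params={"q": "folding camera", "limit": 10}).json()
```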

Q – You have spent the past while working in various museums. Seeing things from the inside, do you think we are nearing a tipping point for museum content sharing and syndication?

I am an inveterate optimist, for better or worse – that’s why I got involved with Europeana despite a degree of scepticism from more seasoned heads whose judgement I respect. As that optimist I would say yes, a tipping point is near, though I’m not yet clear whether it will be at the level of individual organisations or through massive aggregations. More and more stuff is ending up in the latter, and that includes content from small museums. For these guys, the technical barriers are sometimes high, but even they are overshadowed by the “what’s the point?” barriers. And frankly, what is the point for a little museum? Even the national museum behemoths struggle to encourage many developers to build with their stuff, though there are honourable exceptions and it’s early days still – the point is that the difficulty a small museum might have in setting up an API is unlikely to be rewarded with lots of developers making them free iPhone apps. But through an aggregator they get that reach included in the price.

One of my big hopes for Europeana was that it would give little organisations a path to get their collections online for the first time.
Unfortunately it’s not going to do that – they will still have to have their stuff online somewhere else first – but nevertheless it does give them easy access both to audiences and (through the API) to third-party developers that would otherwise pay them no attention. The other thing that aggregators like CHIN, Collections Australia, Digital NZ and Europeana do is offer someone big enough for Google and the like to talk to. Perhaps this in itself will end up with us settling on some de facto standards for machine-readable data, so we can play in that pool and see our stuff more widely distributed.

As for individual museums, we are certainly seeing more and more APIs appearing, which is fantastic. Barriers are lowering, there’s arguably some convergence – some patterns emerging for how to “do” APIs – and we’re seeing bold moves in licensing (the boldest of which will always be in advance of what aggregators can manage), and the more it happens the more it seems like normal behaviour, which will hopefully give others the confidence to follow suit. I think, as ever, it’s a matter of doing things in a way that makes each little step have a payoff. There are gaps in the data and services out there that make it tricky to stitch together lots of the things people would like to do with CH content at the moment – for example, a paucity of easy, free-to-use web services for authority records, few CH thesauri, and no historical gazetteers. As those gaps are filled, the use of museum APIs will gather pace.

Ever the optimist…

Q – What is needed to take ‘hobby prototypes’ like Mashificator to the next level? How can the cultural sector help this process?

Well, in the case of the Mashificator, I don’t plan a next level. If anyone finds it useful I suggest they ask me for the code or do it themselves – in a couple of days most geeks would have something way better than this. It’s on my free hosting, and API rate limits wouldn’t support it if it ever became popular, so it’s probably only ever going to live in my own browser toolbar and maybe my own super-low-traffic blog! But in that answer you have a couple of things that we as a sector could do: firstly, make sure our rate limits are high enough to support popular applications, which may need to make several API calls per page request; secondly, it would be great to have a sandbox that a community of CH data devotees could gather around and play in. And thirdly, in our community we can spread the word and learn lessons from any mashups that are made. I think, actually, that we do a pretty good job of this with mailing lists, blogs, conferences and so on.

As I said before, one thing I really found interesting with this experiment was how it let me quickly compare the APIs I used. From the development point of view some were simpler than others, but some had lovely subtleties that weren’t really used by the Mashificator. At the content end, it’s plain that the V&A has lovely images – and I think their crowd-sourcing has played its part there – but on the other hand, if your search term is treated as a set of keywords rather than a phrase, you may get unexpected results… YTE and Zemanta each have their own character, too, which quickly becomes apparent through this. So that test-bed thing is really quite a nice side benefit.

Q – Are you tracking use of Mashificator? If so, how and why? Is this important?

Yes I am, with Google Analytics – just to see if anyone’s using it, and whether, when they come to the site, they do more than just look at the pages of guff I wrote: do they actually use the bookmarklet? The answer is generally no, though there have been a few people giving it a bit of a work-out. There’s not much sign of people making custom bookmarklets though, so that perhaps wasn’t worthwhile! Hey, lessons learnt.

Q – I know you, like me, like interesting music. What is your favourite new music to code-by?

Damn right, nothing works without music! (At least, not for me.) For working, I like to tune into WFMU, often catching up on archive shows by Irene Trudel, Brian Turner and various others. That gives me a steady stream of quality music, familiar and new. As for recent discoveries I’ve been playing a lot (not necessarily new music, mind): Sharon Van Etten (new), Blind Blake (very not new), Chris Connor (I was knocked out by her version of Ornette Coleman’s “Lonely Woman”; look out for her gig with Maynard Ferguson too). I discovered Sabicas (flamenco legend) a while back, and that’s a pretty good soundtrack for coding, though it can be a bit of a rollercoaster. Too much to mention really, but a lot of the time I’m listening to things to learn on guitar. Lots of Nic Jones… it goes on.

Go give Mashificator a try!

Categories
Interactive Media, Mobile

WaterWorx – our first in-gallery iPad interactive at the Powerhouse Museum

Last week we were installing our first deployment of iPads as gallery interfaces – and they went live on Friday night.

In the newly refreshed Ecologic exhibition – open right now – you can play a game called WaterWorx, deployed on a table of 8 iPads.

WaterWorx is intended to convey the difficulty of managing an urban water system – dams, water towers, water filtration, sewage treatment and storm water – with a growing population. Using simple game mechanics, the water system is turned into a mechanical operation in which the player’s hands are used to control and balance an increasingly difficult set of tasks.

Here’s a video of the gameplay.

Other than the obvious – deploying iPads in the gallery – I’m particularly excited about this project for a number of meta-reasons.

Firstly, this is a deployment of consumer technologies as interfaces. This brings with it an explicit acknowledgement that the entertainment and computing gear visitors can get their hands on outside the museum is always going to be better than, or at least on par with, what museums can themselves deploy. So rather than continue the arms race, the iPad deployment is a means to refocus both visitor attention and development resources on content and engagement – not display technologies. It also picks up on visitors’ own understanding of these devices and piggybacks on those behaviours – whilst allowing us to leverage the existing consumer interest in the device in the short term.

Secondly, the process by which this game was developed was in itself very different for us. WaterWorx was developed by Sydney digital design agency Digital Eskimo together with a motley team from the Powerhouse’s curatorial and web teams, and programmed by iOS developer Bonobo Labs. Rather than an explicit and ‘completed’ brief being given to Digital Eskimo, the game was developed using an iterative and agile methodology, beginning with a process they call ‘considered design’. This brought together stakeholders and potential users all the way through the development process, with ‘real working prototypes’ delivered along the way – something that is pretty common in how websites and web applications are made, but is still, unfortunately, not common practice in exhibition development.

There’s also a third exciting possibility – the game might be re-engineered for longer-term and repeat play, and released to the App Store down the track. Obviously this requires rethinking and ‘complexify-ing’ the game dynamics, with an emphasis on providing incentives and levelling up for repeat play.

I came in this morning to see a large giggle of school children clustered around them, playing furiously and looking deeply engaged. And that’s the most valuable outcome of all.

There will also be some blog posts from the curatorial and web staff involved in the game’s development shortly.

UPDATE (5/11/10) – we’ve just added a new post that shows the honeypot effect that this interactive is creating.

Categories
Collection databases, User behaviour, Web metrics

Actual use data from integrating collection objects into Digital NZ

Two months ago the New Zealand cultural aggregator Digital NZ ingested metadata from roughly 250 NZ-related objects from the Powerhouse collection and started serving them through their network.

When our objects were ingested into Digital NZ they became accessible not just through the Digital NZ site but also through all manner of widgets, mashups and institutional websites that had integrated Digital NZ’s data feeds.

So, in order to strengthen the case for further content sharing in this way, we used Google Analytics’ campaign tracking functionality to quickly and easily see whether users of our content in Digital NZ actually came back to the Powerhouse Museum website for more information on the objects beyond their basic metadata.
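The mechanism is simply to have the aggregator link back to your objects with campaign-tagged URLs; Google Analytics then attributes any visit arriving via those links to the named campaign. A minimal sketch – the tag values and object URL here are illustrative, not the exact ones we used:

```python
from urllib.parse import urlencode

# Illustrative only: the actual tag values used for the Digital NZ links
# aren't given in the post. Google Analytics attributes any visit arriving
# via a URL carrying utm_* parameters to the named campaign.
params = urlencode({
    "utm_source": "digitalnz",         # where the link lives
    "utm_medium": "api",               # how it is delivered
    "utm_campaign": "collection-sharing",
})
# Hypothetical object URL; real collection URLs differ.
object_url = "https://www.powerhousemuseum.com/collection/database/?irn=12345"
print(f"{object_url}&{params}")
```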

Here are the results for the last two months.

Total collection visits from Digital NZ – 98 (55 from New Zealand)
Total unique collection objects viewed – 66
Avg pages per visit – 2.87
True time on site per visit (excluding single page visits) – 11:57min
Repeat visits – 37%

From our perspective these 55 NZ visitors are entirely new visitors (well, except for the 8 visits we spotted from the National Library of NZ, who run Digital NZ!) who would probably never have otherwise come across this content – so that’s a good thing, and very much in keeping with our institutional goal of ‘findability’.

For the same period, here are the top 6 sources for NZ-only visitors to the museum’s collection (not the website as a whole) –


Remember that the Digital NZ figure covers only around 250 discrete objects – so we are looking at just under one new NZ visitor a day via Digital NZ – whereas the other sources cover any of the ~80,000 collection objects.

However, I don’t have access to the overall usage data for Digital NZ so I can’t make a call on whether these figures are higher, lower, or average. But maybe one of the Digital NZ team can comment?

Categories
Mobile, User experience

On augmented reality (again) – time with UAR, Layar, Streetmuseum & the CBA

Jasper Visser from the Nationaal Historisch Museum in the Netherlands has nailed some of the problems with augmented reality in his recent blogpost – ‘Charming tour guide vs mobile 3D AR’.

Jasper compares the analogue-world experience of a guided architectural tour with the digital experience of using the Netherlands Architecture Institute’s UAR application to plot a similar ‘tour’. This isn’t really a fair comparison, but it does raise some serious questions about the appropriateness of technology and the kind of user experience we are trying to adapt/adopt/create.

The Netherlands Architecture Institute’s UAR application, built on Layar, is perhaps the best augmented reality application by (or for) a museum I’ve seen and tried thus far. It narrowly beats out the Museum of London’s Streetmuseum – largely because it looks to the future in terms of content as well as in technology.

As I laboured in my presentation at Picnic ’10, the problem with a lot of the augmented reality and mobile apps museums are doing is that they face a huge user-motivation hurdle – ‘why would you bother?’ Further, many of the ‘problems’ they try to solve are more effectively and effortlessly solved in other, more analogue, ways.

Our very own Powerhouse AR experiment with Layar is clunky and, honestly, beyond the technological ‘wow’ there isn’t a lot of incentive to boot it up that important second time. That might sound critical, but it needs to be put in the context of it being a) an experiment, and b) having no budget allocation.

Earlier in the year in London I couldn’t get the MOL’s Streetmuseum to work properly on my iPhone 3GS, but on my last visit – now with some updates and an iPhone 4 – I was able to get some serious time in with it.

Streetmuseum has been a brilliant marketing campaign for the Museum of London. It has generated priceless coverage in global media and, in so doing, associated the Museum of London with notions of ‘experimentalism’, ‘innovation’ and ‘new technology’. And the incorporation of Streetmuseum into the campaign strategy for the launch of the excellent new galleries has been very effective and synergistic.

It has also demonstrated that there can be an interest in heritage augmented reality – even if it doesn’t quite work the way you’d hope it would.

However, like all these apps, from a user-experience perspective it is clunky, and aligning the historic images with ‘reality’ in the 3D view is an exercise in patience. The promotional screenshots don’t convey the difficulty of real-world use. As a result the 3D view, the most technically innovative part of the app, ends up being a gimmick.

The 2D map view is far more useful and, for the most part, very rewarding. And for the committed, walking around London revealing the ‘layers of history’ can be compelling.

Compared to the Powerhouse layer in Layar, though, Streetmuseum is, excuse the pun, streets ahead (not surprising given the investment). Streetmuseum’s eschewing of a platform approach – building its own system rather than using Layar – might not be the most sustainable long-term strategy, but it certainly delivers a far better experience than Layar. Of course, it is such early days in this space that Layar isn’t exactly a long-term strategy either.

Mac Slocum over at O’Reilly raises some similar issues.

That’s the problem with app-based AR: even when the app is interesting and the implementation is notable, it’s hard to get people (like me) to use it consistently. AR ambivalence is also tied to the bigger issue of app inertia. A company that pours resources into a custom app doesn’t get much return if that app is rarely launched; the user doesn’t develop an affinity for the brand, and that same user certainly doesn’t buy associated products. The app and its AR just sit there, waiting to be uninstalled.

In my Picnic ’10 presentation I briefly showed the Commonwealth Bank of Australia’s (CBA) Property Guide app. Although this is far from a novel idea (in fact, property prices were one of the first things in Layar), the implementation is rather good and points to several things the cultural heritage sector should take note of.

First it addresses something with a clear existing demand – Australians’ obsession with property prices. Second, it manages to surpass your expectations of the available data – by providing, free of charge, access to ‘good enough’ data for almost every house in the street.

When I first booted up the CBA app I expected to get patchy data for my chosen area. Properties near me sell reasonably frequently, but many people also stay in the same place for a long time. So you can imagine my surprise when I was able to see that the last time a place near me sold was in 1984, and for ‘between $30,000 and $40,000’ – as well as data for every single property up my street. That sort of data usually isn’t available, even in tabular form for purchase.

So how might that play out for cultural heritage AR?

Well, I think for a start it means cross-institutional applications and cross-institutional data. There is no technical reason why the same level of data that the CBA app has access to isn’t available for heritage.

Just thinking of the existing rudimentary ideas for these kinds of apps – the ‘Then & Now’ – the local council archives are probably a good place to start, working up the food chain to the big institutions. ‘A photo and a title deed of every property’… it is only a matter of time.

But addressing the ‘demand’ issue is another matter altogether.

Categories
Social media

On chocolate cakes, journalism and co-curating museums

Here’s a great piece from the Nieman Journalism Lab on the New York Times’ community-sourced recipe book – dug out of tens of thousands of records in their archives.

If you change your working relationship to your audience, you will understand that audience in a new way. The tools that support those two steps also support collaborations that produce insights not likely to be found any other way, framed in genres altered by collaboration and by the social tools that made it possible. Tools, genres, partnerships, models of authority and active citizenship all change, and so does the community’s understanding of itself and its history at the same time.

For those who have learned how to look, the Internet reveals layers of inventive food culture liberated from traditional limitations — including the journalist’s earlier understanding of audience — by new speed of publishing, connectivity, innovation . . . Hesser’s team saw need, opportunity, and tools in place to create a new genre of participatory cookbook writing, too, on the Internet …an online platform for gathering talented cooks and curating their recipes…a new community-building venture…It would be democratic and fun…and together they would produce cookbooks without giving all the authority back to experts. Once again, Hesser had the experience of asking people to join in and finding that they loved being invited.

The parallels to the changes in museums – first rise of education and public programmes, and in recent times the rise of the social web and co-curation – are obvious.

It reminded me of John Fiske’s comments, predating the social web, way back in 1989, from Reading the Popular (Routledge):

The resources – television, records, clothes, video games, language – carry the interests of the economically and ideologically dominant; they have lines of force within them that are hegemonic and work in favour of the status quo. But hegemonic power is necessary, or even possible, only because of resistance, so these resources must also carry contradictory lines of force that are taken up and activated differently by people situated differently within the social system. If the cultural commodities or texts do not contain resources out of which the people can make their own meanings of their social relations and identities, they will be rejected and will fail in the marketplace. They will not be made popular.

(emphasis mine)