Printed from : The Leisure Media Co Ltd

17 Jul 2018


Barnes Foundation uses intelligent machines to offer new ways of interpreting art collections
BY Tom Anstey

Philadelphia's Barnes Foundation art gallery has used machine learning to create an intelligent art critic, with the technology able to interpret and pair digitised artworks, recognising art styles, objects and even images of Jesus.

Since its foundation in 1922, the Barnes has built a collection boasting works by the likes of Vincent van Gogh, Paul Cézanne and Claude Monet. Its founder, Albert Barnes, had an unconventional perspective on art interpretation, based on making visual connections between different works instead of focusing on art styles and time frames.

The new AI can identify basic elements in an artwork – such as people, objects and animals – which it then categorises and places into artificially generated collections for visitors to browse.

The technology, however, sees and interprets things differently from a human. Computer vision reads art in unexpected ways: the program interpreted many works by Renoir, for example, as being filled with stuffed animals and teddy bears. While on the surface this might seem like a failure of the technology, for Martha Lucy, deputy director for education and public programmes at the Barnes, it supported a theory she had about the artist's work.

"I’ve been working on an essay about Renoir’s obsession with the sense of touch, which I’m trying to link with his desire to revive artisanal values during the industrial era," she said in a blog post.

"A big part of my argument rests on proving (to the extent this is possible) that Renoir was deliberately trying to evoke the sense of touch in his paintings of fleshy naked women. So discovering that the computer was seeing teddy bears – soft things – was good news."

A research team from Rutgers University created the technology, which at its core tries to ascertain visual similarity between objects. The AI differs from a recognition project created by Fabrica for the Tate, which was trained to recognise images using photographs rather than art. By comparison, the Rutgers version understands the basics of art and will continue to learn as it takes in more images.
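The core idea described above – scoring visual similarity between works – is typically done by reducing each image to a feature vector and comparing vectors. A minimal sketch, assuming the embeddings have already been produced by a model like the Rutgers one (the vectors and artwork names below are invented for illustration, not the Barnes data):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three works (placeholder values).
features = {
    "renoir_bather": [0.9, 0.1, 0.3],
    "renoir_portrait": [0.8, 0.2, 0.4],
    "monet_landscape": [0.1, 0.9, 0.2],
}

# Rank every other work by similarity to the query piece.
query = features["renoir_bather"]
ranked = sorted(
    ((name, cosine_similarity(query, vec))
     for name, vec in features.items() if name != "renoir_bather"),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked[0][0])  # the most visually similar work
```

In a real system the feature vectors come from a trained neural network, so two paintings with similar brushwork or subject matter end up close together even when their photographs differ pixel by pixel.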

"The Rutgers model will be integral to finding visual connections between works of art. This will most likely be the majority of what a visitor ends up seeing on our website as 'visually similar results'," said Shelly Bernstein, a consultant creative technologist working at the Barnes.

"While web visitors may never discover that computers think our collection is comprised of stuffed animals, we’re curious about this second path and what we may learn from it."

The Rutgers system has produced interesting takes on different visual styles, sometimes labelling classic artworks as graffiti and at other times tagging bearded faces or similar shapes as Jesus Christ. According to the museum's curators, the computer's visual connections – though ones a human would be unlikely to make – often made logical sense.

Using the AI, the Barnes website lets visitors explore digitised versions of its collection. Selecting a particular piece brings up visually related works, along with a slider toggle ranging from "more similar" to "more surprising".
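One plausible way to implement such a slider is to blend a similarity score with its inverse, weighted by the slider position. This is a hedged sketch of that idea, not the Barnes site's actual code; the scores and names are illustrative:

```python
def slider_score(similarity, t):
    """t = 0.0 ranks by similarity; t = 1.0 ranks by surprise (dissimilarity)."""
    return (1.0 - t) * similarity + t * (1.0 - similarity)

# Hypothetical similarity scores for candidate works against a selected piece.
candidates = {"work_a": 0.95, "work_b": 0.40, "work_c": 0.10}

def top_result(t):
    """Return the candidate that scores highest at slider position t."""
    return max(candidates, key=lambda name: slider_score(candidates[name], t))

print(top_result(0.0))  # most similar candidate
print(top_result(1.0))  # most surprising candidate
```

At the midpoint the two terms cancel into a flat score, so real implementations usually add a tie-breaker or sample from the middle of the ranking instead.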

"The Renoir teddy bears was a case when the computer saw something that supported (however tenuously) what I was already thinking," said Lucy.

"But what about when it reads the work of art as something that you never would have anticipated? When it perceives something that actually makes you look at a familiar object in a totally new way?

"These weird misreadings could stretch the brain, and this is always good for art history."

The tool is available to try on the Barnes Foundation's online collection.

