
Secrets of animal camouflage research - 6 August 2014

Video special reveals how ambitious field project models predator vision.

How do animals see? It's a question that has vexed biologists and fascinated anyone who has watched an animal go about its business: what does the world look like through their eyes?

It may seem like an impossible question to answer, but BBSRC-funded scientists are attempting to answer some of the fundamental questions about camouflage by understanding how different visual systems see the world.

Video: Secrets of animal camouflage research

The researchers have travelled across Africa, taking over 14,000 images and hours of video footage to catalogue which predators find the hidden eggs of different ground-nesting birds. Back in the lab, they use specially customised software to recreate the visual world of those predators, analysing what makes objects blend into or stand out from their backgrounds to different visual systems, based on real field data (see 'How do animals see? Q&A with Dr Jolyon Troscianko'). It is the first time camouflage has been directly linked to the survival rates of real animals in the field.

The researchers, from the University of Exeter and the University of Cambridge, have taken the analysis one step further by recruiting another predator: humans. By playing the 'citizen science' game Egglab, people (who are predators too, with visual systems like those of the vervet monkeys seen in the video above) can take their place in the evolutionary tree and spot eggs in images derived from the research (see 'Egglab in numbers'). The eggs even 'evolve' as the game progresses, yielding yet more data on how types of camouflage evolve in different habitats.

Fieldwork

Dr Jolyon Troscianko from the University of Exeter's Sensory Ecology and Evolution group says Project Nightjar came about because theories about how camouflage works to evade different visual systems hadn't been tested in the wild. "It's very difficult to find a study system where you can link predation with the quality of an animal's camouflage," he says. Troscianko filmed most of the footage in the film above in Zambia and South Africa.

Along with co-principal investigators Dr Martin Stevens, also from the University of Exeter, and Dr Claire Spottiswoode from the University of Cambridge, the team developed a study system using two classes of ground-nesting birds inspired by Spottiswoode's previous encounters with nightjars in Africa (see African Cuckoos for more). "I was bowled over by their camouflage, which led to discussions with Martin about how we could take advantage of these birds for camouflage research, leading to our collaboration on the current project," she says.

Nightjars sit tight on their eggs and rely on the camouflage of the adult bird to outwit their predators. As a result, their eggs are less camouflaged because the adults do the work. You can see examples of this in our popular one-minute Can you spot the bird? video.

In contrast, plovers and coursers run from their nests when danger approaches. Their much more exposed eggs have therefore evolved better camouflage to blend in with their backgrounds, saving the developing chick inside. In each case, it's an arms race between the patterns, colours and contrast of the eggs and the visual acuity of the predators.

How a bird's eggs might look to a ferret (left), human (middle) and peafowl (right). Copyright: J. Troscianko

But some predators are active mostly at night. So how to know which predator's vision to simulate? The next step of the research required hidden cameras to see which predators were eating the eggs. Troscianko recalls that this was a more difficult task than it first appears. "In the day often the motion trigger was too hot so it would be triggering constantly, so it was footage of a bird just getting hot in the sun," he says. "Predation events are unpredictable, which is why this is such a difficult project that has never been done before."

Troscianko's colleague Spottiswoode, who filmed nightjars and some other footage in the film above, encountered similar problems. "The hardest thing about fieldwork in Zambia is the relentless logistical challenges – endlessly fixing cars, maintaining equipment, and managing our brilliant team of nightjar nest-finders… sometimes to the point of bailing them out of jail!" She adds that it's all worth it: spending all day in the bush surrounded by fascinating natural history gives you new ideas about your current questions and inspires future ones.

How do animals see? Q&A with Dr Jolyon Troscianko

Armed with footage of the predators in action and more than 14,000 images of the birds' eggs in different environments, Troscianko returned to the lab to begin simulating how the animals see.

How do you convert your egg images into a simulated view of what animals would see?

First we work out how sensitive the camera is to different wavelengths of light, so we can simulate how the camera would see thousands of natural objects under specific lighting conditions. We also model how the same objects would look, for example, to a bird, a fish or an insect under exactly the same lighting conditions.

These two datasets let us work out how to map across from camera vision to animal vision. The modelling is computer based, and to do it we need to know the sensitivity of the camera and the animals' eyes to different wavelengths of light.
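In practice this mapping can be built by predicting, for a large set of natural reflectance spectra under a common illuminant, both the camera's responses and the animal's cone catches, and then fitting a transform between the two. The sketch below is a minimal illustration of that idea in Python, not the researchers' actual pipeline; the spectra, sensitivity curves and the simple linear least-squares mapping are all placeholder assumptions.

```python
import numpy as np

# Hypothetical spectral data, sampled from 400-700 nm at 1 nm steps.
wavelengths = np.arange(400, 701)                          # nm
n_objects = 1000
reflectance = np.random.rand(n_objects, wavelengths.size)  # natural-object reflectance spectra
illuminant = np.ones(wavelengths.size)                     # flat 'daylight', purely for illustration
camera_sens = np.random.rand(3, wavelengths.size)          # camera R, G, B channel sensitivities
animal_sens = np.random.rand(4, wavelengths.size)          # e.g. a bird's four cone sensitivities

# Light reaching the camera/eye from each object.
radiance = reflectance * illuminant

# Predicted responses: integrate radiance against each sensitivity curve.
camera_responses = radiance @ camera_sens.T                # shape (n_objects, 3)
cone_catches = radiance @ animal_sens.T                    # shape (n_objects, 4)

# Fit a simple linear mapping from camera space to cone-catch space
# (least squares; the real analysis may use a more sophisticated model).
mapping, *_ = np.linalg.lstsq(camera_responses, cone_catches, rcond=None)

# Apply the mapping to new (already linearised) pixel values from a photo.
pixels = np.random.rand(500, 3)
predicted_cone_catches = pixels @ mapping                  # estimated animal cone responses
```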

Dr Jolyon Troscianko pictured in normal (top) and UV (bottom) light. Copyright: J. Troscianko

How do you find out animal eye sensitivity to different wavelengths of light?

This can be done using a range of different experimental methods, and we base our models on previously published findings. The sensitivity of the pigments in the cone receptors (the cells in our eyes sensitive to light in daytime conditions) can be measured using microspectrophotometry, which works out what wavelengths of light the different cells absorb.

Another method is to use electroretinogram flicker photometry, where light of a single wavelength is flickered at live cone cells, and electrodes are used to see if the cone cells are firing neural signals in time with the light flickering. Then the wavelength is changed so a complete sensitivity map can be made.

But then we also need to factor in the colour-filtering oil droplets that most other vertebrates have in their cones (we placental mammals have lost these really cool filters that improve colour discrimination in many animals), and how the optics of the eye itself absorb different wavelengths. Finally, behavioural experiments can be used in some situations, to see how animals respond as lights of a specific wavelength are dimmed.
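As a rough illustration of how those factors combine, a cone's effective sensitivity can be treated as the product of its pigment absorbance, any oil-droplet filtering and the transmission of the eye's optics. The snippet below sketches this with made-up curves; the shapes and peak wavelengths are illustrative assumptions, not measured data.

```python
import numpy as np

wavelengths = np.arange(300, 701)  # nm, extending into the UV range many birds can see

# Made-up curves, each scaled 0-1: cone pigment absorbance, oil-droplet
# transmittance, and transmission of the eye's optics (cornea and lens).
pigment_absorbance = np.exp(-((wavelengths - 560) / 60.0) ** 2)
oil_droplet_transmittance = 1.0 / (1.0 + np.exp(-(wavelengths - 520) / 10.0))
ocular_media_transmittance = 1.0 / (1.0 + np.exp(-(wavelengths - 350) / 15.0))

# The cone's effective sensitivity is the product of all three filters.
effective_sensitivity = (pigment_absorbance
                         * oil_droplet_transmittance
                         * ocular_media_transmittance)
effective_sensitivity /= effective_sensitivity.max()  # normalise to a peak of 1
```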

What is the hardest part of this complex work?

The hardest bit is getting the images standardised and aligned perfectly before feeding them into the animal vision model, and then carefully cutting out the eggs and nightjars from their backgrounds. For every animal vision image we need to take at least two photos: one with the camera using a 'normal' visible spectrum filter, and another using a filter that only lets through ultraviolet (UV) light.

We needed to make sure the camera didn't move between shots, that the thing we were photographing didn't move, that the lighting conditions didn't change, and we needed to re-focus slightly in UV because of the different refractive properties of UV light. The refocusing also zooms the image in slightly, so the UV photo needs to be very carefully scaled and aligned with the visible image. I had to write my own computer script for doing this automatically because the existing tools just weren't accurate or reliable enough.
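A minimal sketch of that scaling-and-alignment step is shown below. It simply searches a small range of scale factors for the one that best correlates the UV frame with the visible frame; this is an illustrative stand-in for the idea, not Troscianko's actual script, and the function names, scale range and correlation score are assumptions.

```python
import numpy as np
from scipy import ndimage

def match_shape(img, shape):
    """Centre-crop or zero-pad a 2D image to the target shape."""
    out = np.zeros(shape, dtype=img.dtype)
    h, w = min(shape[0], img.shape[0]), min(shape[1], img.shape[1])
    oy, ox = (shape[0] - h) // 2, (shape[1] - w) // 2
    iy, ix = (img.shape[0] - h) // 2, (img.shape[1] - w) // 2
    out[oy:oy + h, ox:ox + w] = img[iy:iy + h, ix:ix + w]
    return out

def best_uv_scale(visible, uv, scales=np.linspace(0.97, 1.03, 13)):
    """Search a small range of scale factors for the one that best aligns
    the UV frame onto the visible-light frame of the same scene."""
    best_scale, best_score = None, -np.inf
    for s in scales:
        scaled = ndimage.zoom(uv, s, order=1)        # rescale the UV image
        scaled = match_shape(scaled, visible.shape)  # crop/pad to match the visible frame
        score = np.corrcoef(scaled.ravel(), visible.ravel())[0, 1]
        if score > best_score:
            best_scale, best_score = s, score
    return best_scale
```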

And what do you enjoy the most?

The bit I like the most is writing code that makes life easier, and produces better output. It's incredibly satisfying spending a few hours writing something that shortens a big processing job by days, or makes sure every little bit of information in the images can be used.

What do you base your assumptions about animal vision on?

We try to make as few assumptions about animal vision as possible. We can fairly easily work out how the cones in the eye respond, but all the neural architecture and visual processing after that is still poorly understood, even in humans. There are some sensible principles to go by though, such as how colour discrimination must be governed by the relative numbers of different cone types in the eye, and how the neurological processing of patterns at different scales is likely to work.

How do you go about constructing the final animal vision images?

We convert all our egg and background images into three visual systems that represent the main predator groups: ferret (as a stand-in for mongoose vision), human (for human and primate predators), and peafowl (a commonly used bird visual system).

Then we process the images to extract things like pattern information (how much pattern 'energy' there is across a range of spatial scales, or marking sizes), or identify the predominant colours.

Next we compare the eggs to their background, and see whether a better colour match or a better pattern match was more likely to protect the nest from being spotted by predators. So our ability to test which aspects of camouflage are most important for protection in the wild relies on a huge amount of previous research into animal vision and image processing.
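The 'pattern energy' idea can be illustrated with a simple granularity-style analysis: band-pass the image at a series of spatial scales, measure how much variation survives at each scale, and compare the egg's profile with its background's. The sketch below is a rough stand-in for the published methods; the difference-of-Gaussians filters and scale choices are assumptions.

```python
import numpy as np
from scipy import ndimage

def pattern_energy(image, sigmas=(1, 2, 4, 8, 16, 32)):
    """Band-pass the image at a series of spatial scales (difference of
    Gaussians) and record how much variation ('energy') survives at each."""
    image = np.asarray(image, dtype=float)
    energies = []
    for s in sigmas:
        band = ndimage.gaussian_filter(image, s) - ndimage.gaussian_filter(image, 2 * s)
        energies.append(band.std())
    return np.array(energies)

def pattern_mismatch(egg_patch, background_patch):
    """Smaller difference between the two energy profiles = better pattern match."""
    return np.abs(pattern_energy(egg_patch) - pattern_energy(background_patch)).sum()
```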

Watch this one-minute video animation of egg camouflage deconstruction from Troscianko's analysis:

Video: Egg camouflage animation

What specialised image processing software do you use?

We do all of our image processing in a program called ImageJ. It's a great open-source image processing platform that's powerful and fast compared to the expensive commercial software we used to use. There are some built-in functions that make some tasks nice and easy, but most of the processing is done by code I've written that gives me complete control over the pixels as they're transformed from a huge pile of photos into numbers ready for statistical analysis (which we do in R – another wonderful piece of open-source software).

Some of the tasks are very processor intensive – particularly working out the main colours in each image. These big jobs can be controlled by a batch script, so we can just leave a powerful computer crunching the numbers for a few days.
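As an illustration of the kind of colour job being batched, the snippet below pulls out an image's main colours with a very small k-means clustering written in plain Python/NumPy. It is a hedged sketch, not the group's ImageJ code; the cluster count and iteration limit are arbitrary assumptions.

```python
import numpy as np

def dominant_colours(image, k=5, iters=20, seed=0):
    """Tiny k-means over pixels to pull out an image's main colours.
    (Downsample large images first - the distance matrix is N x k.)"""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest colour centre.
        dists = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of the pixels assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return centres
```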

Serious games

The taxing analysis of the egg photographs got the researchers thinking about other ways to analyse the data. Inspired by previous 'citizen science' games developed by the group (see Where is that nightjar and Where is that nest, both of which used real images from the field), they set about building a next-generation game to yield more insightful data.

"The idea really came as a follow on from some of the earlier games we'd done presenting images of real camouflaged animals," says Dr Martin Stevens of the University of Exeter. "I thought it would be great to do something more ambitious, whereby we could actually create a game with camouflaged prey that evolve over time to study how camouflage changes in different environments."

Click here to play Egglab. Copyright: University of Exeter

The main aim of Egglab is to see how different types of camouflage, such as background matching or disruptive coloration (see the images on this page for examples), evolve in different habitat types. "We wanted to know what happens when an animal is found on more than one background (a generalist) as opposed to specialising on just one background," says Stevens. "Generalists should evolve camouflage types that should work over different habitats, but it's unclear how this should be done."

Stevens says that the eggs are all made by computer algorithms. The colours and patterns are encoded by an underlying 'genome' and built by combining base images and colours together using simple programs that can be automatically generated. These programs can be copied and given small random alterations that cause differences in the patterns from one generation to the next – just as organisms evolve in the real world.

In one generation, each egg is tested several times over the course of multiple games played by different people, and the eggs are then ranked by average detection time – longer detection times mean better camouflage. The most successful eggs replicate, some with mutations to their appearance, and form the next generation, so that over the generations the eggs evolve better camouflage and longer search times.
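The selection scheme described above can be sketched as a simple loop: rank the eggs by how long players took to find them, keep the hardest-to-spot fraction, and refill the population with mutated copies. The Python below is an illustrative outline only; the elite fraction, the genome representation and the mutate function are placeholders, not details of Egglab itself.

```python
import random

def next_generation(population, detection_times, mutate, elite_fraction=0.25):
    """One round of selection: eggs that took longest to spot 'survive'
    and reproduce with small random mutations.

    population      -- list of egg 'genomes' (any representation)
    detection_times -- average time players took to find each egg
    mutate          -- function returning a slightly altered copy of a genome
    """
    ranked = [genome for _, genome in sorted(zip(detection_times, population),
                                             key=lambda pair: pair[0],
                                             reverse=True)]
    survivors = ranked[:max(1, int(len(ranked) * elite_fraction))]
    # Refill the population with mutated copies of the survivors.
    return [mutate(random.choice(survivors)) for _ in range(len(population))]
```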

Stevens highlights that for the citizen science work they teamed up with Dave Griffiths, an award-winning computer programmer and games designer from FoAM Kernow, a company that researches and develops open-source computer games, music and livecoding technology. For more on the technical aspects of the game's development, see Genetic programming egg patterns in HTML5 canvas and More procedurally rendered eggs in HTML5 canvas.

So far, the researchers have found that colour and contrast are often linked, and both are important. "What seems to be happening so far is that disruptive coloration often arises on a range of background types, but is especially common when the background is more variable (leaf litter)," says Stevens. "Background matching seems especially important on backgrounds that are more uniform (open ground), and when individuals are generalist and eggs evolve against two background types, the camouflage takes longer to improve."

The researchers are still learning a lot from the Egglab game about how camouflage is optimised and works in different habitats, and how the behaviour of an animal and background specialisation can affect this (see above). In addition, the previous games showed that the detection of camouflaged prey depends on the visual system present and the type of camouflage involved.

Egglab in numbers

Total number of players: > 8,000
Hours played: 150 hours of actual search time (6.5 days)
Egg populations evolving separately: 60 (20 per experiment/nightjar type)
Eggs spotted: 356,235
Total egg generations: 561
Average egg spotting time: 0.5-2.8 seconds

Stevens says the citizen science games and rigorous fieldwork are complementary. "The fieldwork looks at how camouflage of real animals in the wild affects how likely they are to be eaten by a range of predators, and how camouflage is influenced by behaviour and nesting strategies of the birds. The egg game looks at how camouflage evolves against different habitats under controlled conditions."

The relationship between camouflage success, background and animal vision is further complicated by differing visual systems. Primates such as vervet monkeys, baboons and humans have three-colour trichromatic vision. Other mammals, such as the mongooses in the video above, have simpler dichromatic vision (two colour channels, such as blue and yellow). But birds (and other animals) have tetrachromatic vision: they see the red, green and blue wavelengths of light that we see, plus ultraviolet (UV) light. (See this video on the BBSRC-funded discovery of UV vision in reindeer.)

Troscianko's initial analysis of both the fieldwork results and the first citizen science games suggests that egg patterning involves a trade-off between hiding the eggs from different predators – mammalian and bird visual systems, for example – a classic case of not being able to fool everybody all of the time.