As photography has gone digital, it has become ever easier to manipulate images with Photoshop and other tools. Digital photographs used in the news industry are often adjusted for reasons of aesthetics—a contrast adjustment here, a color alteration there. But they can also be altered with the aim of deceiving editors or readers. Luckily, digital detection technology is advancing quickly as well.

Hany Farid, a mathematician and digital forensics specialist who teaches computer science at Dartmouth College, has developed a host of tools to accurately identify images that have been altered. He will be speaking—along with Santiago Lyon, director of photography for the AP—at an MIT symposium on April 5 called “Ethics and Forensics in the Age of Photoshop Photojournalism.” Assistant editor Lauren Kirchner spoke with Prof. Farid to learn more about the science involved in photo forensics. This is an edited transcript of that conversation.

Can average news readers or viewers ever tell whether a photograph has been altered? Are there tell-tale signs to look for?

Yes and no. Your brain is actually fairly good at noticing certain inconsistencies in a photo. For example, we’ve all seen what I like to call the “floating head syndrome,” where someone’s head is pasted onto someone else’s body, and it looks disembodied, like it’s literally floating. Those things just pop out at us, and we don’t need anyone to tell us there’s something wrong there. But those examples are deceptive, because those are just examples of a bad fake. The problem is, there are other aspects of the visual system—where your brain processes images—where it’s just really bad at determining whether something is consistent or not. We have done a variety of studies and developed forensic software to determine how good people are at visually assessing authenticity. There are things we are very good at, and there are other things we are very bad at.

For example, we’re very bad at light and shadows. If I show you a photograph, and I ask you, “Are the shadows here consistent or inconsistent?”, you basically will have no idea. You just can’t tell. But here’s the really dangerous part: it’s not just that you can’t tell, it’s that you’re consistently wrong. You will look at something where the shadows are absolutely correct, and you will think, “Nope, something’s wrong here,” and you will say that consistently. So it’s worse than guessing, because you are wrong, you are sure that you’re right, and that’s the worst combination. I like to call that “the arrogance and ignorance effect.” And so the danger of relying on your brain to assess authenticity based on things like shadows and perspective and texture and lighting is that we’re just not actually that good at it.

So while bad fakes are very easy to detect, good fakes are very difficult to detect; and, worse, really good pictures are often said to be fake, because of this failure to reason about things like lighting and reflections and shadows and perspective. You do see this effect in photojournalism, where everybody now sees a remarkable photograph and says, “No, that can’t be real.” There’s almost a knee-jerk reaction in the opposite direction.

So if those are not reliable ways to detect a fake, then what are?

That’s the beauty of mathematics and physics and computer science: we can quantify and measure whether things are consistent or not. We know how to write down equations that quantify how shadows are cast, and we know how to write down equations that describe perspective projection, and we know how to write down equations that describe JPEG compression, and so on and so forth. So with all this we can actually determine whether the things we see are physically correct or incorrect. Now, the issue with these tools, of course, is that they are not yet at the stage where you just push a button and get an answer. It’s not like CSI on TV; it’s actually a fair amount of work.
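To give a flavor of what “equations that describe JPEG compression” can reveal, here is a minimal sketch (not Farid’s actual software) of one well-documented forensic signal: double quantization. JPEG rounds image coefficients to multiples of a quantization step, so re-saving an image with a different step leaves periodic empty bins in the coefficient histogram that software can measure. The sketch below simulates this with synthetic integers rather than real DCT coefficients.

```python
# Hedged illustration only: simulates double-quantization gaps on
# synthetic integers, not on a real JPEG's DCT coefficients.
from collections import Counter
import random

def quantize(values, step):
    """Round each value to the nearest multiple of `step`."""
    return [round(v / step) * step for v in values]

def empty_bin_ratio(values, step):
    """Fraction of multiples of `step` in the data's range that never
    occur -- near zero for once-quantized data, clearly elevated after
    double quantization."""
    counts = Counter(values)
    bins = list(range(min(values), max(values) + 1, step))
    empty = sum(1 for b in bins if counts[b] == 0)
    return empty / len(bins)

random.seed(0)
coeffs = [random.randint(-50, 50) for _ in range(5000)]
once = quantize(coeffs, 2)                # saved once, step 2
twice = quantize(quantize(coeffs, 3), 2)  # saved at step 3, re-saved at step 2
print(empty_bin_ratio(once, 2))   # near 0: histogram looks normal
print(empty_bin_ratio(twice, 2))  # well above 0: periodic gaps betray the re-save
```

Real detectors work on per-frequency DCT histograms of 8×8 blocks and can even localize which regions of an image were compressed a different number of times, but the underlying idea is the same: the physics and arithmetic of the imaging pipeline leave measurable traces.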

Lauren Kirchner is a freelance writer covering digital security for CJR. Find her on Twitter at @lkirchner.