You can fix the colors. You can remove the scratches. You can even sharpen the whole image until it looks crisp. But if the faces still look weird? The entire restoration feels off. Faces are the thing people look at first in any photo, and they're also the hardest part for AI to get right. So picking the model that handles faces best really does matter.
Why Faces Are the Hardest Part to Restore
Think about it. You can look at a photo of a tree with some blurry leaves and your brain just fills in the blanks, no big deal. But a face? If even the tiniest thing is slightly off, you notice it immediately. An eye that's a little too smooth, a jawline that doesn't quite match, a nose that somehow looks like it belongs to a different person. Our brains are wired to pick up on facial details at an almost absurd level of sensitivity.
And old photos make this ten times harder. You've got fading, cracks running right through someone's cheek, water damage that's basically erased half an eyebrow. The AI has to somehow figure out what was there originally and reconstruct it in a way that still looks like the actual person. Not just any generic face. Their face.
That's a really, really hard problem.
What Is Blind Face Restoration Anyway?
You'll see this term thrown around a lot, so here's the simple version. "Blind" face restoration means the AI doesn't know what the original undamaged face looked like. It has no reference photo to work from. It's essentially guessing, but it's a very educated guess based on having seen millions of faces during training.
The model looks at whatever degraded, blurry, cracked mess you give it and tries to reconstruct realistic facial details from scratch. It's kind of like asking someone to repaint a portrait that's been left out in the rain for 50 years, except the painter has studied millions of other portraits and has gotten pretty good at knowing what goes where. Both GFPGAN and CodeFormer are blind face restoration models, but they approach the problem differently and that's where things get interesting.
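This "educated guessing" is possible because blind restoration models are typically trained on synthetic pairs: take a clean face, artificially wreck it, and teach the network to undo the damage. Here's a toy numpy sketch of that degradation step. It's deliberately simplified (real training pipelines also mix in blur kernels, JPEG artifacts, and more), and the stand-in "face" is just a gradient:

```python
import numpy as np

def degrade(face, scale=4, noise_sigma=10.0, seed=0):
    """Toy version of the synthetic degradation used to train blind
    face-restoration models: blur -> downsample -> noise -> upsample.
    `face` is a float array in [0, 255], shape (H, W)."""
    rng = np.random.default_rng(seed)
    # 1. Blur: average each pixel with its 3x3 neighborhood (crude box blur).
    padded = np.pad(face, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + face.shape[0], dx:dx + face.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    # 2. Downsample by `scale` -- this is where detail is lost for good.
    small = blurred[::scale, ::scale]
    # 3. Add sensor/compression-like noise.
    noisy = small + rng.normal(0, noise_sigma, small.shape)
    # 4. Upsample back with nearest-neighbor so shapes match.
    degraded = np.repeat(np.repeat(noisy, scale, axis=0), scale, axis=1)
    return np.clip(degraded, 0, 255)

clean = np.tile(np.linspace(0, 255, 64), (64, 1))  # stand-in "face"
lq = degrade(clean)
print(lq.shape)  # same size as the input, but the detail is gone
```

The model never sees the clean image at restoration time; it only ever learned the statistical mapping from outputs like `lq` back to inputs like `clean`. That's the "blind" part.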
GFPGAN: The Identity Preserver
GFPGAN (Generative Facial Prior GAN) came out of Tencent's ARC Lab and honestly it was a game changer when it dropped. The big thing GFPGAN does well is preserving identity. When you run a photo through GFPGAN, the person in the output still looks like the person in the input. That sounds obvious, but plenty of face restoration models struggle with this.
It works by using a pretrained face generation model (basically StyleGAN2) as a kind of reference library of "what faces should look like," and then it carefully balances between using that knowledge and staying faithful to whatever details are still visible in your damaged photo. The result is usually a face that looks natural, has good texture and detail, and most importantly still looks like the right person.
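The actual mechanism GFPGAN uses for this balancing act is called a channel-split spatial feature transform (CS-SFT): part of the feature channels pass through faithful to the input, while the rest get modulated by the StyleGAN2 prior. Here's a heavily simplified toy of that idea; the split, the linear blend, and the `alpha` knob here are illustrative stand-ins, not the model's real learned transforms:

```python
import numpy as np

def fuse_features(input_feat, prior_feat, alpha=0.5):
    """Toy stand-in for GFPGAN's CS-SFT fusion: split the channels,
    keep one half faithful to the degraded input (identity), and let
    the StyleGAN2 prior contribute realistic texture to the other."""
    half = input_feat.shape[0] // 2
    fused = input_feat.copy()
    # First half of channels: identity path, untouched.
    # Second half: pulled toward the prior's "what faces look like" knowledge.
    fused[half:] = (1 - alpha) * input_feat[half:] + alpha * prior_feat[half:]
    return fused

low_q = np.full((4, 8), 1.0)   # features from the damaged photo
prior = np.full((4, 8), 0.0)   # features from the generative prior
out = fuse_features(low_q, prior, alpha=0.5)
print(out[:2].mean(), out[2:].mean())  # → 1.0 0.5
```

The untouched half is why identity survives: no matter how confident the prior is, a direct path back to the original pixels always remains.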
Where GFPGAN struggles is with really severe damage. If someone's face is basically a smudge, with half the features completely gone, GFPGAN can produce results that look a bit soft or uncertain. It's conservative by nature. It would rather give you something slightly blurry than risk hallucinating features that change who the person looks like. Which honestly is a pretty reasonable trade-off, but it means heavily damaged photos sometimes don't come out as sharp as you'd want.
CodeFormer: The Heavy Damage Fighter
CodeFormer took a different approach. Instead of using a GAN-based prior like GFPGAN, it uses a Transformer-based architecture with a learned discrete codebook. I know that sounds like a mouthful. But basically, CodeFormer learned a "dictionary" of facial features during training, and when it encounters a damaged face, it looks up the closest matching features from its dictionary and pieces them together.
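The "dictionary lookup" at the heart of this is vector quantization: snap each degraded feature to its nearest entry in the learned codebook. This toy sketch shows the lookup itself; the codebook entries and labels here are made up for illustration (the real codebook is learned by a VQ autoencoder and its entries aren't human-interpretable):

```python
import numpy as np

# Made-up 2-D "codebook" of clean facial features. CodeFormer's real
# codebook is learned during training and is much higher-dimensional.
codebook = np.array([
    [0.9, 0.1],   # e.g. "sharp eye corner"
    [0.2, 0.8],   # e.g. "smooth cheek"
    [0.5, 0.5],   # e.g. "soft edge"
])

def quantize(feature):
    """Snap a degraded feature vector to its closest codebook entry."""
    dists = np.linalg.norm(codebook - feature, axis=1)
    return codebook[np.argmin(dists)]

degraded_feature = np.array([0.85, 0.2])   # blurry, noisy observation
print(quantize(degraded_feature))  # → [0.9 0.1], the clean "sharp eye corner"
```

Because every output is assembled from clean dictionary entries, the result is sharp by construction. The Transformer's job is picking which entries, in what arrangement, from a badly degraded input.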
The cool part is that CodeFormer has a "fidelity weight" you can adjust. Crank it up and it stays closer to the original (even if that means less detail). Turn it down and it relies more on its codebook to generate sharp, detailed faces, even if the original was barely recognizable. This flexibility is genuinely useful because different photos need different amounts of creative reconstruction.
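In the official CodeFormer repo this knob is the `-w` (fidelity weight) flag on the inference script, ranging from 0 to 1. Conceptually it trades input fidelity against codebook detail; the linear blend below is a toy stand-in for the model's actual controllable feature transformation module, just to make the trade-off concrete:

```python
import numpy as np

def apply_fidelity(input_feat, codebook_feat, w):
    """Toy version of CodeFormer's fidelity weight w in [0, 1]:
    w -> 1 stays close to the degraded input (better identity),
    w -> 0 trusts the codebook (sharper, more 'invented' detail).
    The real model does this with a learned feature-fusion module,
    not a plain linear blend."""
    return w * input_feat + (1 - w) * codebook_feat

soft = np.array([0.5, 0.5])    # what's actually in the damaged photo
sharp = np.array([0.9, 0.1])   # nearest clean codebook entry
print(apply_fidelity(soft, sharp, w=1.0))  # → [0.5 0.5], pure fidelity
print(apply_fidelity(soft, sharp, w=0.0))  # → [0.9 0.1], pure codebook
```

At `w=1.0` you get back exactly what the photo contains; at `w=0.0` you get the codebook's idealized version. Everything in between is the dial you're turning when you adjust fidelity.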
Not gonna lie, the results on really messed up faces are pretty impressive. Photos where GFPGAN might give you a soft blob, CodeFormer can produce a surprisingly detailed, realistic face. But here's the catch: sometimes the face it produces doesn't quite look like the original person. It's a great-looking face. It's just... not always the right one. For family photos where identity matters, that can be a real problem.
So When Should You Use Which?
Here's the practical breakdown that I wish someone had given me when I first started comparing these two.
If your photo is mildly to moderately damaged and the face is still recognizable, GFPGAN is usually the better pick. It'll clean things up, sharpen the details, and the person will still look like themselves. For family portraits where grandma needs to look like grandma, GFPGAN is your friend.
If the photo is severely damaged and the face is barely there, CodeFormer with a lower fidelity weight can pull out details that GFPGAN simply can't recover. You might lose some identity accuracy, but you'll get a face that actually looks like a face instead of a blurry smudge. For group photos where a face in the background is tiny and destroyed, CodeFormer tends to do better.
And honestly? For a lot of photos, the difference is subtle enough that either one works fine. The gap between these two has narrowed over the past couple of years as both have been refined. The biggest differences show up at the extremes: lightly damaged (GFPGAN wins on identity) and heavily damaged (CodeFormer wins on detail).
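If you wanted to turn the breakdown above into an actual rule of thumb, it might look something like this. To be clear, the threshold, the damage score, and the mapping to a fidelity weight are all hypothetical choices of mine, not numbers from either paper:

```python
def pick_restorer(damage_score):
    """Hypothetical rule of thumb based on the comparison above.
    `damage_score` in [0, 1]: 0 = clean, 1 = face barely visible.
    Returns (model_name, fidelity_weight_or_None)."""
    if damage_score < 0.5:
        # Face still recognizable: prioritize identity preservation.
        return ("GFPGAN", None)
    # Severe damage: let CodeFormer invent detail, and lower the
    # fidelity weight the worse the damage gets.
    fidelity_w = round(max(0.0, 1.0 - damage_score), 2)
    return ("CodeFormer", fidelity_w)

print(pick_restorer(0.2))   # → ('GFPGAN', None)
print(pick_restorer(0.9))   # → ('CodeFormer', 0.1)
```

A lightly damaged family portrait routes to GFPGAN untouched; a nearly obliterated face routes to CodeFormer with the dial turned toward "invent plausible detail."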
How ClearPastAI Handles This Under the Hood
Here's the thing about all of this: most people don't want to manually pick between models and tweak fidelity weights. You just want to tap a button and have your photo look better. That's pretty much the approach ClearPastAI takes. The app analyzes your photo's damage level and applies face restoration intelligently, using techniques inspired by both of these research models to balance identity preservation with detail recovery.
The key insight is that face restoration shouldn't happen in isolation. ClearPastAI processes faces as part of a full restoration pipeline that also handles scratches, color correction, and overall enhancement. When the face restoration step knows what the rest of the image looks like, it can make smarter decisions about tone matching and detail levels so the face blends naturally with everything around it instead of looking weirdly pasted on.
You don't need to know whether GFPGAN or CodeFormer is right for your photo. You don't need to fiddle with fidelity sliders. You just open the app, pick your photo, and the AI figures out the best approach. Which, to be fair, is how it should work for most people.
Restore Faces in Your Old Photos Automatically
Skip the technical details and let ClearPastAI handle the face restoration for you. The app intelligently enhances faces while keeping people looking like themselves. Try it free on your iPhone and see the difference on your own family photos.
Try ClearPastAI Free on iOS