Introducing AI-Powered Delighting

Artomatix R&D has been working tirelessly for more than a year to bring you our latest creation: The Industry’s Most Advanced and Powerful Delighting Solution. We’ve designed it to work on both single photographs and scans that contain a height or normal map. Our solution outperforms all others in two key ways: (1) it produces a more physically plausible albedo color than any other solution given the same data, and (2) it is the first solution that can completely remove hard shadows.

Why is Delighting so hard?

Delighting, or the process of turning a photograph into an albedo map, is a classically ill-posed problem. By ill-posed, we mean that only a portion of the information needed to solve the problem is available, like putting together a puzzle when most of the pieces are missing. A toy example of an ill-posed problem is:

What numbers equal seven when summed together?

Many solutions exist:

  • 2.5 + 4.5 = 7
  • 4 + 2 + 1 = 7 
  • 1 + 1 + 1 + 1 + 1 + 2 = 7
  • 7 + 0 = 7
  • ...

In fact, if there are no restrictions on the numbers, there are infinitely many solutions to this problem, so there is no way to single out the right one. It is impossible to solve as stated! So how do we make the impossible… possible? By adding constraints!

  • The values must be integers
  • The values must be non-negative
  • There are exactly two values

Given these three constraints, the solution space is reduced from infinity to four:

  • 7 + 0 = 7
  • 6 + 1 = 7
  • 5 + 2 = 7
  • 4 + 3 = 7
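
If you like, a couple of lines of Python make the same point by brute force (the snippet is purely illustrative):

    # Enumerate pairs of non-negative integers (a, b) with a + b == 7.
    # Each unordered pair is counted once, matching the list above.
    solutions = [(a, 7 - a) for a in range(8) if a >= 7 - a]
    print(solutions)  # [(4, 3), (5, 2), (6, 1), (7, 0)]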

You’re probably wondering why this matters… well, Delighting is basically the same problem. Given a photograph of a material, each pixel’s brightness can be calculated as a product of the incoming light, the material’s reflectance and the structure of the material itself. A simplified equation can be written as:

Pixel Color = Incoming Light * Diffuse Reflectance * Specular Reflectance * Material Microstructure
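
To make the model concrete, here is a minimal sketch of the diffuse part of that equation for a single pixel. The numbers are made up, the specular term is left out for brevity, and the shading follows the simple Lambertian N·L rule:

    import numpy as np

    # Toy forward model for one pixel: a single directional light and a Lambertian surface.
    # All values below are illustrative, not real captured data.
    light_color = np.array([1.0, 0.95, 0.9])        # Incoming Light (RGB)
    albedo      = np.array([0.6, 0.4, 0.3])         # Diffuse Reflectance (RGB)
    normal      = np.array([0.0, 0.2, 0.98])        # Material Microstructure (surface normal)
    light_dir   = np.array([0.0, 0.0, 1.0])         # light arriving from directly overhead

    normal = normal / np.linalg.norm(normal)        # normals must be unit length
    shading = max(np.dot(normal, light_dir), 0.0)   # geometric (N . L) term
    pixel_color = light_color * albedo * shading    # the value the camera observes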

Our goal is to recover the Diffuse Reflectance term, i.e. the material’s albedo color. To find a direct solution, constraints are introduced through assumptions about the material and the lighting conditions. Some common assumptions are:

  • Diffuse term is Lambertian
  • No Specular Reflection
  • Light is strongly directional from overhead
  • No Direct Light Occlusion (i.e. shadows)

Given these assumptions, the problem space is sufficiently constrained that we can solve for the remaining terms such as Material Microstructure (e.g. the normal map) and find a plausible Diffuse color term free of any influence from lighting.
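
In code, solving under those assumptions amounts to dividing the estimated lighting back out of each pixel. The sketch below continues the toy Lambertian model above and is only meant to show the shape of the calculation, not our production algorithm:

    import numpy as np

    def naive_delight(image, normals, light_dir, light_color):
        """Recover a diffuse color assuming Lambertian shading, no specular
        reflection, one known directional light and no shadows (toy example)."""
        # Per-pixel geometric term N . L, clamped so we never divide by zero.
        shading = np.clip(np.einsum('ijk,k->ij', normals, light_dir), 1e-4, None)
        # Invert pixel = light * albedo * shading  =>  albedo = pixel / (light * shading)
        return image / (shading[..., None] * light_color[None, None, :])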

In reality, the above assumptions are seldom all true, and thus algorithms for removing light from a single image tend to fail. Luckily, there are many strategies for capturing the ground truth of these terms through various scanning processes.

Scanning to the rescue… kind of?

There are many ways to capture additional terms in the equation:

  • Incoming Light: Use a Chrome ball
  • Material Microstructure: Use multiple light sources for Photometric Stereo
  • Specular Reflectance: Polarized Light can separate specular from diffuse reflectance
  • … etc.

As these scanning methods are introduced, the problem becomes significantly more constrained and the final results become more physically accurate.
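
Photometric stereo is a nice example of how much those extra captures help. With a handful of photos lit from known directions, per-pixel normals and a diffuse albedo estimate fall out of a simple least-squares solve. The sketch below is a minimal Lambertian version, not a full capture pipeline:

    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: (N, H, W) grayscale shots of the same surface,
        light_dirs: (N, 3) known unit light directions.
        Solves I = L @ (albedo * normal) per pixel, assuming a Lambertian
        surface and no shadows. Illustrative sketch only."""
        n, h, w = images.shape
        I = images.reshape(n, -1)                            # (N, H*W) observations
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W) scaled normals
        albedo = np.linalg.norm(G, axis=0)                   # per-pixel diffuse albedo
        normals = G / np.maximum(albedo, 1e-6)               # unit surface normals
        return albedo.reshape(h, w), normals.T.reshape(h, w, 3)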

A great example of this is the Unity Delighting tool by Cyril Jover. This approach was tailored for Photogrammetric scans, which capture a globally lit texture and a bent normal map (which includes normal and ambient occlusion information). Again assuming Lambertian diffuse, no specular reflection and no shadows, the Microstructure information and the partial incoming lighting information can be used to reconstruct an approximation of the full incoming environment light. From there, a fairly accurate per-pixel lighting term can be calculated and removed from the fully lit input image.

Example of the Unity Delighting Tool. We can see the approach produces fantastic results when the assumptions are met, but fails to remove hard shadows when the no-direct-occlusion assumption is violated.
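
For intuition, the general scan-based recipe can be sketched as follows. This is not the Unity tool’s actual code; it simply assumes Lambertian diffuse, approximates the reconstructed environment as an ambient term plus one dominant directional light, and treats the bent normal’s length as ambient occlusion (one common convention):

    import numpy as np

    def delight_scan(lit_texture, bent_normals, ambient, light_dir, light_color):
        """lit_texture: (H, W, 3) globally lit texture, bent_normals: (H, W, 3)
        with length encoding occlusion, ambient / light_color: RGB,
        light_dir: unit vector of the dominant light. Illustrative sketch only."""
        occlusion = np.linalg.norm(bent_normals, axis=-1, keepdims=True)  # ambient visibility
        direction = bent_normals / np.maximum(occlusion, 1e-6)            # unit normals
        direct = np.clip(np.einsum('ijk,k->ij', direction, light_dir), 0, None)[..., None]
        per_pixel_light = occlusion * ambient + direct * light_color      # approximate lighting
        return lit_texture / np.maximum(per_pixel_light, 1e-4)            # remove the lighting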

Enter the Artomatix Delighting Tool

From the outset we wanted to combine the very best ideas and practices from across the industry and build on that foundation with our own technology. We had three goals:

Physically Plausible Albedo Colors

Single image delighting is a very ill-posed problem. Solutions to date rely on either Naive Assumptions or scanned Ground Truth Data. These are the two ends of the spectrum: we either know the true value for a term or we simply ignore it. In reality, humans are very good at solving these ill-posed problems, and we do so by making educated assumptions, i.e. assumptions based on learning from past experience.

Removing the light from a material leaves only pure color information.

We’ve introduced educated assumptions into the process without a human having to explicitly provide them. This significantly constrains the problem space beyond Naive Assumptions and makes single image delighting a possibility for the first time ever!

Works for Photographs and Scans

When we started this project, our only goal was to generate an albedo map from a single photograph. It wasn’t until we were well into development that we realized a large portion of artists needing a delighting solution were scanning their assets using Photogrammetry, not just taking single pictures. While our solution was able to delight Photogrammetric scans by just taking the lit texture as input, it was obvious we could produce even better results by using all the additional information captured in the scan to help guide the algorithms, much like the highly acclaimed Unity delighting tool. This is really where the Artomatix Delighting Tool, within ArtEngine, was born. Our solution defines the new state of the art for single photograph delighting and can improve upon those results even further by including an additional height or normal map.

Additional information can make the delighting prediction more accurate. 

Hard Shadows are Hard

Early in development, albedo creation and hard shadow removal were deeply entangled with each other. However, it became increasingly clear that the two offered value separately, so they were broken up into two nodes. As a result, if you’re finding the albedo for an image with no hard shadows, you don’t need to go through the extra process. Alternatively, if you want the shading and microscopic details associated with a real-world lit image but don’t want the hard shadows, you can remove just the shadows.

Hard Shadow Removal & Albedo Generation + Hard Shadow Removal

When removing shadows, half of the problem is finding the shadows in the first place. Sometimes it’s difficult to tell the difference between a dark surface and a part of a bright surface covered in shadow. By default, the Hard Shadow Removal node lets you create a shadow mask by thresholding the pixels based on brightness. This approach generally works but can sometimes lead to less-than-ideal masks. Alternatively, if you have a surface map (either normal or height), we can use that additional information to run a physical simulation over the surface to find the incoming light direction and then ray-cast physically accurate shadows.

Tweaking the strength of shadow detection to remove more or fewer features and get the desired result.
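
A brightness threshold of the kind described above can be sketched in a few lines. This is purely illustrative; the threshold value and luminance weights are assumptions, not the node’s actual parameters:

    import numpy as np

    def brightness_shadow_mask(image, threshold=0.35):
        """Rough shadow mask from per-pixel brightness. image: (H, W, 3) in [0, 1].
        Pixels darker than the threshold are flagged as shadow, which is exactly
        why a genuinely dark surface can be misclassified. Illustrative only."""
        luminance = image @ np.array([0.2126, 0.7152, 0.0722])   # perceptual brightness
        return (luminance < threshold).astype(np.float32)        # 1 = shadow, 0 = lit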

We would love for you to try this for yourself! Get in touch and we will send an evaluation copy straight to your inbox.