Odd terms in the SH irradiance expansion

Here’s one last observation about SH irradiance which is perhaps more a curiosity than anything else. Or at least we couldn’t work out how to make it useful!

Consider the integral for computing the L1 vector of coefficients \vec{\boldsymbol{R}}_1:

    \[ \vec{\boldsymbol{R}}_1=\frac{1}{4\pi} \oint \vec{\boldsymbol{\omega}}\, R(\vec{\boldsymbol{\omega}}) \, d\Omega \]

This looks somewhat similar to the definition of irradiance, and if we dot the whole expression with the normal vector we get:

    \[ \vec{\boldsymbol{n}} \! \cdot \! \vec{\boldsymbol{R}}_1=\frac{1}{4\pi}\oint\vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}}\, R(\vec{\boldsymbol{\omega}}) \, d\Omega \]

    \[ =\frac{1}{4}\left( \frac{1}{\pi} \int_{H+}\vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}}\, R(\vec{\boldsymbol{\omega}}) \, d\Omega +\frac{1}{\pi}\int_{H-}\vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}}\, R(\vec{\boldsymbol{\omega}}) \, d\Omega \right) \]

which yields

    \[ \vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{R}}_1=\frac{1}{4} \left( I(\vec{\boldsymbol{n}}) - I(-\vec{\boldsymbol{n}}) \right) \]

This is actually quite surprising — what it says is that the L1 band captures the antipodal difference in irradiance perfectly. If you know the exact irradiance in a direction, and you know the L1 vector, then you can compute the exact irradiance in the opposite direction. Or, to put it another way, the only odd terms in the irradiance expansion are the linear terms in the L1 band.

Perhaps this is just a curiosity, but it feels like it might give us a useful clue for improving higher-order SH irradiance models.
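For the curious, here is a quick numerical sanity check of the identity above (a Python sketch; the test radiance function, sample count and directions are arbitrary choices of ours, not anything from the derivation):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 2_000_000

    # Uniform samples on the unit sphere.
    w = rng.normal(size=(N, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)

    d = np.array([0.3, -0.5, 0.8])
    d /= np.linalg.norm(d)

    # An arbitrary, everywhere-positive test radiance function.
    R = 1.0 + np.maximum(0.0, w @ d) ** 2

    # R1 = (1/4pi) * integral of w * R(w): the mean of w * R over uniform samples.
    R1 = (w * R[:, None]).mean(axis=0)

    def irradiance(n):
        # I(n) = (1/pi) * integral over the upper hemisphere of (n.w) R(w)
        #      = 4 * mean over the whole sphere of max(0, n.w) * R(w)
        return 4.0 * np.mean(np.maximum(0.0, w @ n) * R)

    n = np.array([0.0, 0.0, 1.0])
    print(n @ R1, 0.25 * (irradiance(n) - irradiance(-n)))   # agree to sampling error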

Converting SH Radiance to Irradiance

In the last post we explained an alternative, simpler way of defining the Spherical Harmonic basis functions. We are now going to use this definition of the L0 and L1 basis functions to investigate a classic lighting use case: converting radiance to irradiance.

Radiance is the incoming light at a point. Since this is a spherical function, it can be approximated by a truncated SH series, and this approximation can be computed by sampling (in, say, your favourite path tracer or real-time global illumination system). However, for shading geometry we need the cosine-weighted total of incoming light over the hemisphere, known as irradiance, which for a diffuse surface is proportional to the outgoing light.

For the purposes of this post we assume the perfectly diffuse Lambertian BRDF, which is a constant function. Therefore irradiance is the following hemispherical integral:

    \[ I(\vec{\boldsymbol{n}})=\frac{1}{\pi}\int_H\vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}} R(\vec{\boldsymbol{\omega}}) d\Omega \]

where \vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}} is the geometry term.

To compute the SH approximation of irradiance we can replace R(\vec{\boldsymbol{\omega}}) by R_{sh}(\vec{\boldsymbol{\omega}}) and compute the contribution of each SH band to the integral independently:

    \[ I_0=\frac{1}{\pi}\int_H\vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}}\, R_0 \, d\Omega = R_0 \]

    \[ \vec{\boldsymbol{I}}_1=\frac{1}{\pi}\int_H \vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}} \, 3 \vec{\boldsymbol{R}}_1\!\cdot\!\vec{\boldsymbol{\omega}} \, d\Omega =2\vec{\boldsymbol{R}}_1 \]

    \[ I_2^{ij}=\frac{1}{\pi}\int_H \vec{\boldsymbol{n}}\!\cdot\!\vec{\boldsymbol{\omega}} \, \tfrac{15}{2}\omega_i \omega_j R_2^{ij} \, d\Omega =\tfrac{15}{8}R_2^{ij} \]

Putting it all together, we find that the irradiance reconstruction is:

    \[ I_{sh}(\vec{\boldsymbol{n}})=R_0+2\vec{\boldsymbol{R}}_1\!\cdot\!\vec{\boldsymbol{n}}+\tfrac{15}{8} n_i n_j R_2^{ij}+\ldots \]

Here the radiance reconstruction constants have been absorbed into the irradiance computation. This means we have only one set of constants to apply to convert the measured radiance coefficients into irradiance.
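As a concrete sketch of this reconstruction (written in Python rather than shader code; the function name and the storage of the coefficients as a scalar, a 3-vector and a symmetric traceless 3×3 matrix are our own illustrative choices):

    import numpy as np

    def irradiance_l2(n, R0, R1, R2):
        """Evaluate I_sh(n) = R0 + 2 R1.n + (15/8) n_i n_j R2_ij."""
        return R0 + 2.0 * np.dot(R1, n) + (15.0 / 8.0) * (n @ R2 @ n)

    # Example: radiance concentrated in a single direction d, with R0 = 1.
    d = np.array([0.0, 0.0, 1.0])
    R0, R1 = 1.0, d
    R2 = np.outer(d, d) - np.eye(3) / 3.0      # SH coefficients of a delta at d
    print(irradiance_l2(d, R0, R1, R2))        # ~4.25 (exact point-light value is 4)
    print(irradiance_l2(-d, R0, R1, R2))       # ~0.25 (exact value is 0)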

…and we’re done?

For quite a while the Enlighten SH solver generated L1 irradiance using this linear reconstruction. We considered it an optimisation to fold the conversion into the solver itself. However it’s pretty easy to see this has serious problems. One nice feature of our definition for L1 is that we know 0 \le |\vec{\boldsymbol{R}}_1| \le R_0. The lower bound is attained when the lighting is completely symmetrical or ambient (with no preferred direction), while the upper bound occurs if the incoming light comes from a single direction. However, the factor of two in the linear radiance-to-irradiance conversion means that for very strongly directional lighting environments, our approximation for irradiance will be negative in some direction. This is clearly undesirable since irradiance can never be negative in reality, and causes obvious visual artefacts in practice.

Improving L1 irradiance with a non-linear model

To improve this situation, we eventually unbaked the conversion constants in the solver (so it once again generated SH radiance), and applied a more complex conversion to irradiance in shader code. The rest of this post contains the motivation and derivation of this improved L1 irradiance model.

We already know that the formula given above is the best possible linear conversion to irradiance. Therefore we need to look for a non-linear alternative. To narrow this down we’re first going to find a better polynomial approximation for the pure directional case |\vec{\boldsymbol{R}}_1| = R_0. Once we’re happy with that we can generalise as |\vec{\boldsymbol{R}}_1| / R_0 varies. Finally, we need to ensure the energy and dynamic range of the outgoing irradiance are correct for our measured radiance values.

Pure directional irradiance

For an ideal point light source with energy 1 in direction \vec{\boldsymbol{d}}, Lambertian irradiance is given by:

    \[ I(\vec{\boldsymbol{n}}) = \begin{cases} 4 (\vec{\boldsymbol{d}} . \vec{\boldsymbol{n}}) & \text{if}\ \vec{\boldsymbol{d}} . \vec{\boldsymbol{n}} \ge 0 \\ 0 & \text{otherwise} \end{cases} \]

This has a minimum value of 0 (attained on a whole hemisphere), and a maximum value of 4. Compare this to the linear reconstruction for L1 irradiance:

    \[ I_{lin}(\vec{\boldsymbol{n}}) = 1 + 2 (\vec{\boldsymbol{d}}.\vec{\boldsymbol{n}}) \]

This has a minimum of -1 and a maximum of 3. Although this integrates to the correct energy as written, when we use it in practice in a shader we’ll clamp negative values to 0, with the net result that the total energy of the function will be too high. Therefore the linear reconstruction gives us the wrong energy, and the wrong dynamic range (incorrect maximum value).
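A quick numerical check of the energy claim, using the convention above that "energy" means the integral of the profile over x \in [-1, 1]:

    import numpy as np

    N = 1_000_000
    x = -1.0 + 2.0 * (np.arange(N) + 0.5) / N     # midpoint samples of [-1, 1]
    dx = 2.0 / N
    I_lin = 1.0 + 2.0 * x

    print(np.sum(I_lin) * dx)                     # ~2.0: correct energy as written
    print(np.sum(np.maximum(I_lin, 0.0)) * dx)    # ~2.25: clamping adds 12.5% energy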

Best smooth approximation

To find our non-linear approximation in the pure directional case, let’s first specify that the approximation is smooth. This makes sense since we want it to be one endpoint of a smoothly varying overall model. To satisfy this, we will try to fit a polynomial function f (which will automatically be smooth) in the variable x = \vec{\boldsymbol{d}}.\vec{\boldsymbol{n}}.

There are constraints on this function that we already know: the total energy (integral under the curve) should be 2, the minimum value 0 and the maximum value 4.

Additionally, we are going to specify derivative conditions at -1 (the point opposite the light source). The true irradiance is 0 on the whole back-facing hemisphere. We can’t attain that with a polynomial, but we can at least specify that f'(-1) = f''(-1) = 0.

This gives us five constraints to satisfy, so we can fit a unique quartic ax^4 + bx^3 + cx^2 + dx + e (taking the minimum to be at x = -1 and the maximum at x = 1). Solving for the coefficients, the x^4 term turns out to vanish, and we are left with:

    \[ f(x) = \frac{1}{2} (1 + x)^3 \]
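As a check on this fit (a small sketch using the point constraints stated above), solving the five linear constraints numerically confirms that the x^4 coefficient vanishes:

    import numpy as np

    # Rows encode: f(-1)=0, f(1)=4, f'(-1)=0, f''(-1)=0, integral over [-1,1] = 2,
    # for f(x) = a x^4 + b x^3 + c x^2 + d x + e with unknowns (a, b, c, d, e).
    A = np.array([
        [ 1.0, -1.0,  1.0, -1.0, 1.0],
        [ 1.0,  1.0,  1.0,  1.0, 1.0],
        [-4.0,  3.0, -2.0,  1.0, 0.0],
        [12.0, -6.0,  2.0,  0.0, 0.0],
        [ 0.4,  0.0,  2/3,  0.0, 2.0],
    ])
    b = np.array([0.0, 4.0, 0.0, 0.0, 2.0])
    print(np.linalg.solve(A, b))   # [0.  0.5 1.5 1.5 0.5]  ->  (1/2)(1 + x)^3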

Generalise as length of L1 vector varies

We now know what to do in the two extreme cases. When |\vec{\boldsymbol{R}}_1| = R_0 we can plug in our non-linear approximation. Alternatively, if |\vec{\boldsymbol{R}}_1| = 0 we have no directional variation at all, so irradiance is simply a constant function. But what should we do in between the two extremes?

For each value in between there is a range of possibilities, from a linear model to something “as non-linear” as our extreme model. It’s reasonable to suppose that our model should become more linear as it becomes more ambient (less directional). So let’s define the model power to be:

    \[ p = 1 + 2 \frac{| \vec{\boldsymbol{R}}_1 |}{R_0} \]

This varies between 1 and 3 based on the length of the L1 vector. Our final model will then be based on (1+x)^p where x = \hat{\boldsymbol{R}_1}.\vec{\boldsymbol{n}}, that is, the dot product of the normal with the normalised direction of the L1 vector.

At this point it’s useful to introduce a new variable q = \frac{1}{2}(1 + x), so the model is of the form a + bq^p where a and b are constants we need to find.

We have two constraints to apply to find a and b: total energy, and dynamic range.

We are assuming that the total energy is 2 (only the length of L1 is varying, not L0). Therefore we want to find b so that \int_{-1}^{1} bq^p dx = 2, which gives b = p + 1. Our model is then a linear interpolation between the ambient term (the constant 1) and the directionally-varying term (p+1)q^p; since both integrate to 2 over [-1, 1], any such blend has the correct total energy:

    \[ a + (1-a)(p+1)q^p \]

The correct dynamic range (maximum – minimum) varies linearly with the length of L1 and equals 4 \frac{|\vec{\boldsymbol{R}}_1 |}{R_0}. The dynamic range of the model is (1-a)(p+1). Substituting in the definition of p and solving for a gives:

    \[ a = (1 - \frac{|\vec{\boldsymbol{R}}_1 |}{R_0}) / (1 + \frac{|\vec{\boldsymbol{R}}_1 |}{R_0}) \]
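As a sanity check on the algebra, we can evaluate the model at an arbitrary mid-range directionality (here |\vec{\boldsymbol{R}}_1|/R_0 = 0.5, a value picked purely for illustration) and confirm the energy and dynamic-range constraints hold:

    import numpy as np

    t = 0.5                                     # |R1| / R0, an arbitrary test value
    p = 1.0 + 2.0 * t
    a = (1.0 - t) / (1.0 + t)

    N = 1_000_000
    x = -1.0 + 2.0 * (np.arange(N) + 0.5) / N   # midpoint samples of [-1, 1]
    q = 0.5 * (1.0 + x)
    f = a + (1.0 - a) * (p + 1.0) * q ** p

    print(np.sum(f) * (2.0 / N))                # ~2.0: energy is preserved
    print(f.max() - f.min(), 4.0 * t)           # both 2.0: range equals 4 |R1| / R0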

Final improved L1 irradiance model

Putting it all together, the final model is:

    \[ I_{sh}(\vec{\boldsymbol{n}}) = R_0 (a + (1 - a)(p + 1)q^p ) \]

where

    \[ q = \frac{1}{2}(1 + \hat{\boldsymbol{R}_1}.\vec{\boldsymbol{n}}) \]

    \[ p = 1 + 2 \frac{| \vec{\boldsymbol{R}}_1 |}{R_0} \]

    \[ a = (1 - \frac{|\vec{\boldsymbol{R}}_1 |}{R_0}) / (1 + \frac{|\vec{\boldsymbol{R}}_1 |}{R_0}) \]
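For reference, here is a minimal Python sketch of the model (in practice it lives in shader code; the function name, the epsilon guard and the clamp on |\vec{\boldsymbol{R}}_1|/R_0 are our own defensive additions, not part of the derivation):

    import numpy as np

    def l1_irradiance(n, R0, R1, eps=1e-6):
        """Non-linear L1 radiance-to-irradiance model: I(n) = R0 (a + (1-a)(p+1) q^p)."""
        r = np.linalg.norm(R1)
        if R0 <= eps or r <= eps:
            return R0                            # ambient: no directional variation
        t = min(r / R0, 1.0)                     # directionality, clamped for safety
        q = 0.5 * (1.0 + np.dot(R1, n) / r)
        p = 1.0 + 2.0 * t
        a = (1.0 - t) / (1.0 + t)
        return R0 * (a + (1.0 - a) * (p + 1.0) * q ** p)

    # Pure directional case recovers (1/2)(1 + d.n)^3: 4 towards the light, 0 away.
    d = np.array([0.0, 0.0, 1.0])
    print(l1_irradiance(d, 1.0, d), l1_irradiance(-d, 1.0, d))   # 4.0 0.0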

Conclusion

We’ve discussed how to convert SH radiance (incoming light) into SH irradiance (outgoing bounced light), assuming a diffuse Lambertian BRDF. The natural, linear best-fit method generates functions which can attain negative values, which in the case of light is an obviously undesirable property.

In the case of L1 we fix this with a non-linear reconstruction, fitting the model to the measured direction, energy and dynamic range. This model gives very good “bang for the buck”: it produces decent shading quality, despite only using a small number of coefficients (four per colour channel).

Unfortunately, it’s frustratingly unclear how to extend this non-linear reconstruction naturally to higher SH bands. Instead, the most practical current approach is to remove negative values by “de-ringing”; Peter-Pike Sloan presented a robust numerical method for doing this at SIGGRAPH Asia 2017.

Alternative definition of Spherical Harmonics for Lighting

The Spherical Harmonic (SH) series has proved itself a useful mathematical tool in various domains like quantum mechanics and acoustics, as well as in lighting computations.

However the traditional definition from quantum mechanics (other variations exist) is somewhat intimidating:

    \[ Y_{\ell }^{m}(\theta ,\varphi )=(-1)^{m}{\sqrt {{(2\ell +1) \over 4\pi }{(\ell -m)! \over (\ell +m)!}}}\,P_{\ell }^{m}(\cos {\theta })\,e^{im\varphi } \]

The constants here are well-motivated: they are required so that quantum mechanical probability distributions are normalised to 1… but if we blindly plug this definition straight in for a 3D graphics/lighting use case, we end up evaluating lots of trigonometric functions and carrying around complicated factors of \pi that aren’t necessary. In the literature, these constants are often quoted as decimal quantities without explanation, making everything even more confusing. If instead we make a simplifying redefinition, our SH coefficients become easier to interpret and the computations require no trigonometry and fewer magic constants (and thus, hopefully, less debugging…)

Background

The Spherical Harmonic (SH) series of functions is the analogue of the Fourier Series for functions on the surface of a 2-sphere {S}^2 = \{ (x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1 \}. The complete set of functions is an infinite-dimensional basis for functions on the sphere, but in practical use the series is truncated to give an approximation of an arbitrary function by a finite weighted sum of basis functions.

In computer graphics, spherical harmonics are used as a form of compression, which in turn greatly accelerates computations. For instance, the incoming light at a point in space is a spherical function (since it varies with direction), but using SH its approximation can be compactly represented by a handful of coefficients (the weights for the first few basis functions). This allows computations to be performed on a vector of these SH coefficients, rather than the spherical functions themselves.

Spherical harmonics are a good choice for this compressed representation because the SH approximation is invariant under rotation. That is, the SH approximation of a rotated function is the same as the rotated SH approximation of the original function.

Spherical Harmonic Bands

The Spherical Harmonic series is composed of separate “bands” of functions, usually denoted L0, L1, L2 etc. The L0 band is a single constant function; the L1 band consists of three linear functions; the L2 band contains five quadratic functions; and so on. For real-time lighting it is unusual to go past the L2 band (which requires nine total coefficients per channel) since the data and computation requirements become large, and the L2 approximation is already pretty accurate. However use cases in physics and astronomy can require up to L3000!

We will return to the bands later, but for now let’s start with some definitions.

Truncated Weighted Sum

Given a basis B_i (\vec{\boldsymbol{\omega}}) , a spherical function R(\vec{\boldsymbol{\omega}}) can be written as a weighted sum:

    \[ R(\vec{\boldsymbol{\omega}}) = R_0 B_0(\vec{\boldsymbol{\omega}}) + R_1 B_1(\vec{\boldsymbol{\omega}}) + R_2 B_2(\vec{\boldsymbol{\omega}}) + \ldots \]

where R_0, R_1, \ldots are scalar coefficients.

Truncating the sum gives a finite approximation:

    \[ R_{tr}(\vec{\boldsymbol{\omega}}) \, = \, R_0 B_0(\vec{\boldsymbol{\omega}}) + \ldots + R_n B_n(\vec{\boldsymbol{\omega}}) \]

where the vector of coefficients (R_0, \ldots , R_n) fully describes the approximation.

Standard Inner Product

Recall that the standard definition of the inner product of spherical functions R, S is a spherical integral:

    \[ \langle R, S \rangle_{std} = \oint \, R(\vec{\boldsymbol{\omega}}) \, S(\vec{\boldsymbol{\omega}}) \, d\Omega \]

where d\Omega is the measure over the sphere and \vec{\boldsymbol{\omega}} is the variable of integration.

Therefore if the basis is orthonormal we compute the coefficients for a function with integrals:

    \[ R_i = \langle R, B_i \rangle_{std} = \oint \, R(\vec{\boldsymbol{\omega}}) \, B_i(\vec{\boldsymbol{\omega}}) \, d\Omega \quad \quad i = 0, 1, \ldots \]

Constant Basis Function

The standard definition of the first SH basis function — the single function in L0 — is:

    \[ Y_0^0 (\vec{\boldsymbol{\omega}}) = \frac{1}{\sqrt{4\pi}} \]

This definition is motivated by ensuring normalisation over the sphere, that is:

    \[ \langle Y_0^0, Y_0^0 \rangle_{std} \, = \, \oint (Y_0^0)^2 \, d\Omega \, = \, \oint \frac{1}{4\pi} \, d\Omega \, = \, 1 \]

However for our use case this “extra” factor of \sqrt{4 \pi} is redundant. Consider what happens when we approximate a constant function R(\vec{\boldsymbol{\omega}}) = \alpha. First we compute the first SH coefficient:

    \[ R_0 = \oint R(\vec{\boldsymbol{\omega}}) Y_0^0 d\Omega =\oint \alpha \frac{1}{\sqrt{4 \pi}} d\Omega = \sqrt{4\pi} \alpha \]

Now to construct the SH approximation we form the weighted sum of basis functions, that is we multiply the coefficient and the basis function:

    \[ R_{sh}(\vec{\boldsymbol{\omega}}) = R_0 Y_0^0 = \frac{\sqrt{4\pi}\alpha }{\sqrt{4\pi}} = \alpha \]

As expected, the function is reconstructed exactly, but the process is clearly more complicated than necessary: we gain a factor of \sqrt{4 \pi} in the SH coefficient only to cancel it with the same factor in the basis function.

The factor of 4 \pi arises since it is the surface area of the unit sphere, but that isn’t relevant to our intended use case. It is much simpler to subsume this factor in a new definition of the inner product.

Normalised Inner Product

Redefine the inner product:

    \[ \langle R, S \rangle \, = \, \frac{1}{4\pi} \oint R(\vec{\boldsymbol{\omega}}) \, S(\vec{\boldsymbol{\omega}}) \, d\Omega \]

The idea is to “normalise so the surface area of the sphere is one.” With this new definition, for R(\vec{\boldsymbol{\omega}}) = \alpha we now obtain the more natural outcome that the corresponding SH coefficient R_0 = \alpha.

In fact this definition results in a 1-to-1 correspondence between the constructive approach of computing SH coefficients via sampling and the theoretical integral form:

    \[ R_0 = \frac{1}{n} \sum_{i=1}^{n} R(\vec{\boldsymbol{\omega}}_i) \quad \mapsto \quad R_0 = \frac{1}{4\pi} \oint R(\vec{\boldsymbol{\omega}}) \, d\Omega \]

where \vec{\boldsymbol{\omega}}_1, \ldots, \vec{\boldsymbol{\omega}}_n are samples taken from a uniform spherical distribution.

This means that to compute the ith SH coefficient we simply sum the sampled function value multiplied by the value of the ith basis function over the n sample directions, and divide the result by n.

Notation

We will abuse the notation B_l^* for the basis functions, where the subscript l is the band, and the superscript is an index within the band. This is similar to the standard notation, except that we use B rather than Y to distinguish our renormalised basis functions.

L0 Band

With our renormalised inner product, we can define the first SH basis function:

    \[ B_0^0 (\vec{\boldsymbol{\omega}}) = 1 \]

This is the only basis function which has a non-zero average value over the sphere. This means that the first SH coefficient for a function represents the average energy over the sphere — the subsequent coefficients and basis functions neither add nor remove energy (instead they simply “move it around”).

L1 Band

Define:

    \[ B_1^{-1,0,1} = x, y, z \]

Note that we have chosen functions which are not unit-length with respect to our inner product. This is deliberate. We choose to take care of the normalisation in the reconstruction step — that is, when we compute the approximation R_{sh}. If the function we are measuring is radiance (incoming light) then for shading we need to convert it to irradiance (outgoing light) for a given normal direction, and the reconstruction coefficients will get swept into this conversion.

In fact, it is convenient to think of the L1 band as a single vector \vec{\boldsymbol{B}} _1 = (x, y, z) with corresponding SH coefficients \vec{\boldsymbol{R}}_1.

The direction of \vec{\boldsymbol{R}}_1 is the average direction of the function R. If R is radiance, then it is the average direction of the incoming light (weighted by intensity).

By construction, the length of \vec{\boldsymbol{R}}_1 will vary between zero and R_0, the first SH coefficient. The ratio |\vec{\boldsymbol{R}}_1| / R_0 is an indication of “how directional” the function is. If the ratio is one then R is completely directional — all of the energy is at a single point on the sphere. Conversely, if the ratio is zero then R is symmetrical, and the L1 band gives us no directional information at all.
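To illustrate (a sketch using the uniform-sampling correspondence from earlier; the test functions are arbitrary), a constant function gives a ratio near zero, while a clamped-cosine lobe gives a ratio of about two-thirds:

    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=(1_000_000, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)     # uniform samples on the sphere

    def l0_l1(R):
        R0 = R.mean()                                 # <R, B_0^0>
        R1 = (w * R[:, None]).mean(axis=0)            # <R, (x, y, z)>
        return R0, R1, np.linalg.norm(R1) / R0

    print(l0_l1(np.ones(len(w))))                     # constant: ratio ~ 0
    print(l0_l1(np.maximum(0.0, w[:, 2])))            # clamped cosine: ratio ~ 2/3

For the clamped-cosine lobe the exact values are R_0 = 1/4 and |\vec{\boldsymbol{R}}_1| = 1/6, so the ratio is exactly 2/3.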

L2 Band

There are six different quadratic combinations of the variables x, y, z:

    \[ x^2, y^2, z^2, xy, yz, xz \]

However, since we are on the surface of a 2-sphere we know that x^2 + y^2 + z^2 = 1. Therefore there is one linear dependency between these six functions, and we end up with five basis functions in L2.

The simplest way to capture the L2 band is a 3×3 matrix:

    \[ B_2^{ij} = \omega _i \omega _j - \tfrac{1}{3} \delta _{ij} \quad \quad \quad i, j \in \{1,2,3\} \]

with a corresponding matrix R_2^{ij} of SH coefficients, where \omega_{1,2,3} = x, y, z and \delta_{ij} is one if i = j and zero otherwise. The negative one-third term is required to ensure each function has zero integral over the sphere, so is orthogonal to the constant basis function.

This 3×3 matrix R_2^{ij} is symmetric, meaning that R_2^{ij} = R_2^{ji}, and traceless, meaning that the diagonal elements sum to zero: R_2^{11} + R_2^{22} + R_2^{33} = 0. Therefore it does indeed have five degrees of freedom (and an actual implementation would not store nine coefficients!)
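For example (a sketch using the same uniform sampling as before, with an arbitrary clamped-cosine test function), the estimated coefficient matrix comes out symmetric with a numerically negligible trace:

    import numpy as np

    rng = np.random.default_rng(2)
    w = rng.normal(size=(500_000, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)     # uniform samples on the sphere

    R = np.maximum(0.0, w[:, 2])                      # test function: clamped cosine

    # R2_ij = < R, w_i w_j - delta_ij / 3 > under the normalised inner product.
    B2 = w[:, :, None] * w[:, None, :] - np.eye(3) / 3.0
    R2 = (R[:, None, None] * B2).mean(axis=0)

    print(np.allclose(R2, R2.T), np.trace(R2))        # symmetric, trace ~ 0
    print(np.diag(R2))                                # ~ (-1/48, -1/48, 1/24)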

Whereas L0 and L1 have relatively simple physical meanings, it is harder to grasp L2 intuitively. The linear L1 band is “antipodal” in the sense that if you add weight in the +x direction then it must be balanced by negative weight in the opposite -x direction. For the quadratic L2 band, the +x and -x directions are the same (since their squares are equal). Instead, to ensure a net-zero overall contribution, if weight is added to the x axis then the balancing negative weight is shared equally in the orthogonal y and z axes.

Although L2 is already a good approximation, in the case of CG lighting this property can have counter-intuitive effects: adding a light may cause “negative light” to appear in some orthogonal direction.

Reconstruction

With the above definitions, the SH approximation to the original function is:

    \[ R_{sh}(\vec{\boldsymbol{\omega}}) = R_0 + 3 \vec{\boldsymbol{R}}_1 \! \cdot \! \vec{\boldsymbol{\omega}} + \tfrac{15}{2}\omega_i \omega_j R_2^{ij} + \ldots \]

This formulation requires only simple rational constants applied in the reconstruction step.
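As a check on these constants (a sketch; the test function is arbitrary but chosen to lie entirely in the span of L0, L1 and L2), projecting by uniform sampling and then reconstructing recovers the function up to sampling error:

    import numpy as np

    rng = np.random.default_rng(3)
    w = rng.normal(size=(1_000_000, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)     # uniform samples on the sphere
    x, y, z = w[:, 0], w[:, 1], w[:, 2]

    R = 1.0 + 0.5 * y + x * z                         # constant + linear + quadratic

    R0 = R.mean()
    R1 = (w * R[:, None]).mean(axis=0)
    R2 = ((w[:, :, None] * w[:, None, :] - np.eye(3) / 3.0) * R[:, None, None]).mean(axis=0)

    def R_sh(v):
        return R0 + 3.0 * np.dot(R1, v) + 7.5 * (v @ R2 @ v)

    for v in (np.array([0.0, 0.0, 1.0]), np.array([0.6, 0.8, 0.0]), np.array([0.0, 0.6, 0.8])):
        exact = 1.0 + 0.5 * v[1] + v[0] * v[2]
        print(exact, R_sh(v))                         # pairs agree to sampling error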

Conclusion

Everything should be made as simple as possible, but no simpler.

Albert Einstein

We have shown how to redefine the SH basis functions for L0, L1 and L2 in a way that is simpler and more natural for many use cases. This definition removes all trigonometric functions, factors of \pi and square roots, and just leaves the rational constants required for normalisation. The process can be naturally extended to higher SH bands as required.

Although we’ve mentioned lighting a few times in this post, nothing here is lighting-specific. In the next post we’ll discuss a lighting problem: how best to convert from SH radiance (incoming light) to irradiance (outgoing light).