Blender Git Loki
Git Commits -> Revision e1df90a
August 21, 2016, 04:06 (GMT)
Cycles: Use the correct bias and variance models for the least-squares fit and global bandwidth optimization

The approach that is used to find the global bandwidth is:
- Run the reconstruction filter for different bandwidths and estimate bias and variance
- Fit analytic bias and variance models to these bandwidth-bias/variance pairs using least squares
- Minimize the MSE term (Bias^2 + Variance) analytically using the fitted models

The models used in the LWR paper are:
- Bias(h) = a + b*h^2
- Variance(h) = (c + d*h^(-k))/n
where (a, b, c, d) are the parameters to be fitted, h is the global bandwidth, k is the rank and n is the number of samples. Classic linear least squares is used to find a, b, c and d.

Then, the paper states that MSE(h) = Bias(h)^2 + Variance(h) is minimal for h = (k*d / (4*b^2*n))^(1/(k+4)).

Now, what is suspicious about this term is that a and c don't appear. c makes sense: its contribution to the variance is independent of h. a, however, does not, because squaring the Bias term produces the cross term 2*a*b*h^2, which depends on both h and a.

It turns out that this minimization term is wrong for these models, but correct when using Bias(h) = b*h^2 (without the constant offset). That model also makes intuitive sense, since the bias should go to zero as the filter strength (bandwidth) goes to zero. Similarly, the variance model should go to zero as h goes towards infinity, since infinite filter strength would eliminate all noise.

Therefore, this commit changes the bias and variance models to no longer include the constant terms. The change in result can be significant: in my test scene, the average bandwidth halved.
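To illustrate the fit and the closed-form minimizer described above, here is a minimal standalone C++ sketch. It is not the actual Cycles kernel code; the function name and data layout are hypothetical. It fits the constant-free models Bias(h) = b*h^2 and Variance(h) = d*h^(-k)/n to sampled (bandwidth, bias, variance) triples via one-parameter linear least squares, then evaluates h_opt = (k*d / (4*b^2*n))^(1/(k+4)).

    #include <cmath>
    #include <cstdio>
    #include <vector>

    /* Sketch only, with hypothetical names (not the Cycles implementation).
     *
     * Fit the constant-free models
     *   Bias(h)     = b*h^2
     *   Variance(h) = d*h^(-k) / n
     * by least squares and return the analytic minimizer of
     *   MSE(h) = b^2*h^4 + d*h^(-k)/n,
     * i.e. h_opt = (k*d / (4*b^2*n))^(1/(k+4)). */
    static double fit_optimal_bandwidth(const std::vector<double> &h,
                                        const std::vector<double> &bias,
                                        const std::vector<double> &var,
                                        double k, double n)
    {
        /* One-parameter least squares: minimizing sum_i (bias_i - b*h_i^2)^2
         * gives b = sum(bias_i*h_i^2) / sum(h_i^4), and analogously for d. */
        double num_b = 0.0, den_b = 0.0, num_d = 0.0, den_d = 0.0;
        for (std::size_t i = 0; i < h.size(); i++) {
            const double h2 = h[i] * h[i];
            const double hk = std::pow(h[i], -k);
            num_b += bias[i] * h2;
            den_b += h2 * h2;
            num_d += var[i] * hk;
            den_d += hk * hk;
        }
        const double b = num_b / den_b;
        const double d = n * (num_d / den_d); /* The variance model divides by n. */

        /* Closed-form minimum of MSE(h) = b^2*h^4 + d*h^(-k)/n. */
        return std::pow(k * d / (4.0 * b * b * n), 1.0 / (k + 4.0));
    }

    int main()
    {
        /* Toy data generated exactly from b = 2, d = 5, k = 3, n = 64. */
        const double k = 3.0, n = 64.0;
        std::vector<double> h = {0.5, 1.0, 1.5, 2.0}, bias, var;
        for (double hi : h) {
            bias.push_back(2.0 * hi * hi);
            var.push_back(5.0 * std::pow(hi, -k) / n);
        }
        printf("h_opt = %f\n", fit_optimal_bandwidth(h, bias, var, k, n));
        return 0;
    }

Setting the derivative of MSE(h) = b^2*h^4 + d*h^(-k)/n to zero gives 4*b^2*h^3 = k*d*h^(-k-1)/n, which rearranges exactly to the paper's h_opt; with the constant a retained in the bias model, the extra 4*a*b*h term would appear and the closed form would no longer hold.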
Commit Details:
Full Hash: e1df90ad39194831c5d453453bfe68a55121e250
Parent Commit: fba2b77
Lines Changed: +33, -52