MiikaHweb - Blender Git Statistics v1.06
"Pygpu_extensions" branch
Total commits: 58
Total committers: 15
First Commit: February 11, 2021
Latest Commit: February 12, 2021
Commits by Date
Date | Number of Commits
---|---
February 12, 2021 | 51
February 11, 2021 | 7
Committers
Popular Files
Filename | Total Edits |
---|---|
gpu_py_framebuffer.c | 9 |
gpu_py_texture.c | 7 |
gpu_py_types.c | 5 |
rna_nodetree.c | 4 |
CMakeLists.txt | 4 |
gpu_py_types.h | 4 |
gpu_py_state.c | 3 |
mesh_test.py | 3 |
gpu_py_shader.c | 3 |
node_geo_attribute_randomize.cc | 3 |
Latest commits
February 12, 2021, 21:55 (GMT)
Comment the methods that free a GPU object (except offscreen)

February 12, 2021, 21:55 (GMT)
Buffer: Correct wrong error message in buffer creation

February 12, 2021, 21:55 (GMT)
GPUFrameBuffer: redo framebuffer initialization: Now attachments can be passed as a texture or a dictionary containing the texture, layer and mip

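As context, a minimal sketch of what the two attachment forms described above might look like from the Python GPU API. The keyword names (`depth_slot`, `color_slots`), the dictionary keys, and the texture format strings are assumptions for illustration, and the snippet only runs inside Blender with an active GPU context:

```python
import gpu

# Textures to attach (format names are assumptions for illustration).
color_tex = gpu.types.GPUTexture((256, 256), format='RGBA16F')
depth_tex = gpu.types.GPUTexture((256, 256), format='DEPTH_COMPONENT32F')

# Attachment passed directly as a texture ...
fb_simple = gpu.types.GPUFrameBuffer(depth_slot=depth_tex, color_slots=color_tex)

# ... or as a dictionary that also selects the layer and mip level.
fb_explicit = gpu.types.GPUFrameBuffer(
    depth_slot=depth_tex,
    color_slots={"texture": color_tex, "layer": 0, "mip": 0},
)
```
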
February 12, 2021, 21:55 (GMT)
GPUTexture: Allow 'size' to be passed with any type of sequence object

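A hedged sketch of the `size` argument accepting different sequence types (and, per a later commit in this list, a plain int). The format names are assumptions and an active GPU context inside Blender is required:

```python
import gpu

tex_tuple = gpu.types.GPUTexture((128, 128), format='RGBA8')   # size as a tuple
tex_list = gpu.types.GPUTexture([128, 128], format='RGBA8')    # size as a list
tex_1d = gpu.types.GPUTexture(64, format='RGBA8')              # plain int for a 1D texture
```
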
February 12, 2021, 21:54 (GMT)
EEVEE: Depth of field: New implementation

This is a complete refactor over the old system. The goal was to increase quality first and then have something more flexible and optimised.

[attachments: F9603145, F9603142, F9603147]

This fixes issues we had with the old system, which were:
- Too much overdraw (low performance).
- Not enough precision in render targets (ugly color banding/drifting).
- Poor resolution near in-focus regions.
- Wrong support of orthographic views.
- Missing alpha support in viewport.
- Missing bokeh shape inversion on the foreground field.
- Issues on some GPUs (see T72489). (But I'm sure this one will have other issues as well, heh...)
- Fix T81092.

I chose Unreal's Diaphragm DOF as a reference / goal implementation. It is well described in the presentation "A Life of a Bokeh" by Guillaume Abadie, available here: https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04

Alongside the main implementation, we provide a way to increase quality by jittering the camera position for each sample (the ones specified under the Sampling tab). The jittering divides the actual post-processing DOF radius so that it fills in the undersampling. The user can still add more overblur to get a noiseless image, at the cost of bokeh shape sharpness. Effect of overblur (left without, right with): [attachments: F9603122, F9603123]

The actual implementation differs a bit:
- The foreground gather implementation uses the same "ring binning" accumulator as the background, but with a custom occlusion method. This has the problem of inflating foreground elements when they are over background or in-focus regions. This was a hard decision, but it was preferable to the other method, which gave poor opacity masks for the foreground and had other, more noticeable issues. Do note it is possible to improve this part in the future if a better alternative is found.
- Use an occlusion texture for the foreground. The presentation says it wasn't really needed for them.
- The TAA stabilisation pass is replaced by simple neighborhood clamping at the reduce-copy stage, for simplicity.
- We don't do a brute-force in-focus separate gather pass. Instead we just do the brute-force pass during resolve. Using the separate pass could be a future optimization if needed, but might give less precise results.
- We don't use compute shaders at all, so shader branching might not be optimal. But performance is still way better than our previous implementation.
- We mainly rely on density change to fix all undersampling issues, even for the foreground (which, strangely, is something the reference implementation is not doing).

Remaining issues (not considered blocking for me):
- Slight defocus stability: because the slight-defocus brute-force gather uses the bare scene color, highlights are dilated and make convergence quite slow or impossible when using jittered DOF (or gives ).
- ~~Slight defocus inflating: there seems to be a 1px inflation discontinuity of the slight-focus convolution compared to the half resolution. This is not really noticeable if using a jittered camera.~~ Fixed.
- The foreground occlusion approximation is a bit glitchy and gives incorrect results if a defocused foreground element overlaps a farther foreground element. Note that this is easily mitigated by using the jittered camera position. [attachments: F9603114, F9603115, F9603116]
- The foreground is inflating, not revealing the background. However, this avoids some other bugs too, as discussed previously. Also mitigated with the jittered camera position. [attachments: F9603130, F9603129]
- Sensor vertical fit is still broken (does not match Cycles).
- Scattered bokeh shapes can look a bit strange at polygon vertices. This is due to the distance field stored in the bokeh LUT, which is not rounded at the edges. It is barely noticeable if the shape does not rotate.
- ~~The sampling pattern of the jittered camera position is suboptimal. Could try something like a Hammersley or Poisson-disc distribution.~~ Used a hexaweb sampling pattern, which is not random but has better stability and overall coverage.
- Very large bokeh (> 300 px) can exhibit undersampling artifacts in the gather pass and quite a bit of bleeding. But at this size it is preferable to use the jittered camera position.

Code-wise, the changes are pretty much self-contained and each pass is well documented. However, the whole pipeline is quite complex to understand from a bird's-eye view.

Notes:
- There is the possibility of using an arbitrary bokeh texture with this implementation. However, the implementation is a bit involved.
- The gathering max sample count is hardcoded to avoid dealing with shader variations. The actual max sample count is already quite high, but samples are not evenly distributed due to the ring binning method.
- While this implementation does not need 32-bit/channel textures to render correctly, it does use many other textures, so actual VRAM usage is higher than the previous method for the viewport but lower for renders. Textures are reused to avoid many allocations.
- Bokeh LUT computation is fast and done for each redraw because it can be animated. The texture can also be shared with other viewports that have different camera settings.

February 12, 2021, 21:54 (GMT)
Geometry Nodes: Add dependency relation for collection objects

Currently, moving or changing an object referenced in a node modifier's node group does not trigger re-evaluation. Because there is no collection relation in the dependency graph, we must add the relation to all objects in the collection individually.

February 12, 2021, 21:54 (GMT)
Geometry Nodes: Add operation setting to attribute randomize node

This commit adds a drop-down to the attribute randomize node to support a few operations on the values of existing attributes: "Replace/Create" (the existing behavior), "Add", "Subtract", and "Multiply". At this point, the operations are limited by what is simple to implement. More could be added in the future, but there isn't a strong use case for more complex operations anyway, and a second math node can be used.

Differential Revision: https://developer.blender.org/D10269

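The behaviour of the four operations can be pictured with a small sketch. This is plain Python for illustration only, not the node's actual implementation, and the operation identifiers are hypothetical:

```python
import random

def attribute_randomize(existing, minimum, maximum, operation, seed=0):
    """Combine an existing attribute value with a random value (illustrative)."""
    value = random.Random(seed).uniform(minimum, maximum)
    if operation == 'REPLACE_CREATE':   # existing behavior: overwrite or create
        return value
    if operation == 'ADD':
        return existing + value
    if operation == 'SUBTRACT':
        return existing - value
    if operation == 'MULTIPLY':
        return existing * value
    raise ValueError(f"unknown operation: {operation}")
```

For example, `attribute_randomize(2.0, 0.0, 1.0, 'MULTIPLY')` scales the existing value by a random factor between 0 and 1.
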
February 12, 2021, 21:54 (GMT)
Cleanup: Decrease scope of RNA enum definitions

Since these enums are only used in a single function, they can be defined where they are used. Similar to rB2f60e5d1b56dfb8c9104.

February 12, 2021, 21:54 (GMT)
Py Doc: Fix rst syntax errors

February 12, 2021, 21:54 (GMT)
Geometry Nodes: Make instances real on-demand

This commit makes the geometry output of the collection info node usable. The output is the geometry of a collection instance, but this commit adds a utility to convert the instances to real geometry, used in the background whenever it is needed, like copy-on-write.

The recursive nature of the "realize instances" code is essential, because collection instances in the `InstancesComponent` might have their own geometry sets containing even more collection instances, which might then contain object instances, etc.

Another consideration is that currently every single instance contains a reference to its data. This is inefficient, since most of the time there are many locations and only a few sets of unique data. So this commit adds a `GeometryInstanceGroup` to support this future optimization. The API for instances returns a vector of `GeometryInstanceGroup`. This may be less efficient when there are many instances, but it makes more complicated operations, like point distribution, that need to iterate over the input geometry multiple times, much simpler.

Any code that needs to change data, like most of the attribute nodes, can simply call `geometry_set_realize_instances(geometry_set)`, which will move any geometry in the `InstancesComponent` to new "real" geometry components. Many nodes can support read-only access to instances in order to avoid making them real; this will be addressed where needed in the near future.

Instances from the existing "dupli" system are not supported yet.

Differential Revision: https://developer.blender.org/D10327

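The recursive structure described above can be sketched in a few lines of Python. The class and function names below are hypothetical stand-ins for the C++ types mentioned in the commit, not Blender's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class GeometrySet:
    meshes: list = field(default_factory=list)      # "real" geometry in this set
    instances: list = field(default_factory=list)   # (child GeometrySet, transform) pairs

def realize_instances(geometry_set, transform="identity", out=None):
    """Collect all real geometry reachable through nested instances (illustrative)."""
    if out is None:
        out = []
    for mesh in geometry_set.meshes:
        out.append((mesh, transform))
    for child_set, child_transform in geometry_set.instances:
        # Recursion is essential: a collection instance may contain further
        # collection or object instances of its own.
        realize_instances(child_set, f"{transform} * {child_transform}", out)
    return out
```
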
February 12, 2021, 21:54 (GMT)
OpenColorIO: remove default display workaround

A fix for this is in 2.0 (and recent 1.1.x versions), no need for this anymore.

Differential Revision: https://developer.blender.org/D10275

February 12, 2021, 21:54 (GMT)
Py Doc: Update Sphinx and theme versions

February 12, 2021, 21:54 (GMT)
Fix T85562: Remove Win32 RIM_INPUTSINK

Removal of Win32 code that allows background windows to receive raw input.

Differential Revision: https://developer.blender.org/D10408
Reviewed by Brecht Van Lommel

February 12, 2021, 21:54 (GMT)
Geometry Nodes: Allow attribute nodes to use different domains

Currently every attribute node assumes that the attribute exists on the "points" domain, so it generally isn't possible to work with attributes on other domains like edges, polygons, and corners. This commit adds a heuristic to each attribute node to determine the correct domain for the result attribute. In general, it works like this:
- If the output attribute already exists, use that domain.
- Otherwise, use the highest priority domain of the input attributes.
- If none of the inputs are attributes, use the default domain (points).

For the implementation I abstracted the check a bit, but each node has a slightly different situation, so we end up with slightly different `get_result_domain` functions in each node. I think this makes sense; it keeps the code flexible and more easily understandable.

Note that we might eventually want to expose a domain drop-down on some of the nodes. But that will be a separate discussion; this commit focuses on making a more useful choice automatically.

Differential Revision: https://developer.blender.org/D10389

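A hedged sketch of that heuristic in plain Python; the domain names and the priority order are assumptions for illustration, not the node code itself:

```python
DOMAIN_PRIORITY = ['CORNER', 'POINT', 'EDGE', 'POLYGON']  # assumed order, highest first

def get_result_domain(existing_attribute_domains, input_attribute_domains,
                      result_name, default='POINT'):
    # 1) If the output attribute already exists, keep its domain.
    if result_name in existing_attribute_domains:
        return existing_attribute_domains[result_name]
    # 2) Otherwise, use the highest-priority domain among the input attributes.
    candidates = [d for d in input_attribute_domains if d in DOMAIN_PRIORITY]
    if candidates:
        return min(candidates, key=DOMAIN_PRIORITY.index)
    # 3) If none of the inputs are attributes, fall back to the default domain.
    return default
```

For example, with no existing result attribute and inputs on `'EDGE'` and `'POINT'`, this returns `'POINT'` under the assumed priority order.
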
February 12, 2021, 21:54 (GMT)
GPencil: Fix compiler warnings after previous commit

These warnings were not detected by the Windows compiler, but the Linux compiler does report them.

February 12, 2021, 21:54 (GMT)
Py Doc: Delete old deployment scripts

Now, the API docs are deployed via the new devops pipeline developed by James.

February 12, 2021, 21:54 (GMT)
OpenColorIO: upgrade to version 2.0.0

Ref T84819

Build System
============
This is an API-breaking new version, and the updated code only builds with OpenColorIO 2.0 and later. Adding backwards compatibility was too complicated.
* Tinyxml was replaced with Expat, adding a new dependency.
* Yaml-cpp is now built as a dependency on Unix, as was already done on Windows.
* Removed currently unused LCMS code.
* Pystring remains built as part of OCIO itself, since it has no good build system.
* Linux and macOS check for the OpenColorIO version, and disable it if too old.
Ref D10270

Processors and Transforms
=========================
CPU processors now need to be created to do CPU processing. These are cached internally, but the cache lookup is not fast enough to execute per pixel or texture sample, so for performance these are now also exposed in the C API. The C API for transforms will no longer be needed after all changes, so remove it to simplify the API and fallback implementation.
Ref D10271

Display Transforms
==================
Needs a bit more manual work constructing the transform. LegacyViewingPipeline could also have been used, but isn't really any simpler, and since it's legacy we'd better not rely on it. We moved more logic into the opencolorio module to simplify the API. There is no need to wrap a dozen functions just to be able to do this in C rather than C++. It's also tightly coupled to the GPU shader logic, and so should be in the same module.
Ref D10271

GPU Display Shader
==================
To avoid baking exposure and gamma into the GLSL shader and requiring slow recompiles when tweaking, we manually apply them in the shader. This leads to some logic duplication between the CPU and GPU display processors, but it seems unavoidable.

Caching was also changed. Previously this was done both on the imbuf and opencolorio module levels. Now it's all done in the opencolorio module by simply matching color space names. We no longer use cacheIDs from OpenColorIO, since computing them is expensive and they are unlikely to match now that more is baked into the shader code.

Shaders can now use multiple 2D textures, 3D textures and uniforms, rather than a single 3D texture, so allocating and binding those adds some code.

Color space conversions for blending with overlays are now hardcoded in the shader. This was using hardcoded numbers anyway; if this ever becomes a general OpenColorIO transform it can be changed, but for now there is no point in adding code complexity.
Ref D10273

CIE XYZ
=======
We need standard CIE XYZ values for rendering effects like blackbody emission. The relation to the scene linear role is based on the OpenColorIO configuration. In OpenColorIO 2.0 configs, roles can no longer have the same name as color spaces, which means our XYZ role and colorspace in the configuration give an error. Instead we use the new standard aces_interchange role, which relates scene linear to a known scene-referred color space. Compatibility with the old XYZ role is preserved, if the configuration file has no conflicting names.

Also includes a non-functional change to the configuration file to use an XYZ-to-ACES matrix instead of REC709-to-ACES. This makes debugging a little easier, since the matrix is the same one we have in the code now and is also found easily in the ACES specs.
Ref D10274

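The "CPU processors now need to be created" change is visible in OpenColorIO 2.0's own Python bindings. A minimal sketch, assuming a config is reachable via the `OCIO` environment variable and that it defines color spaces named "Linear" and "sRGB" (names depend on the loaded config):

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromEnv()               # load the config pointed to by $OCIO
processor = config.getProcessor("Linear", "sRGB")  # color-space names depend on the config
cpu = processor.getDefaultCPUProcessor()           # explicit CPU processor in OCIO 2.0

# Apply the transform to a single RGB pixel.
print(cpu.applyRGB([0.5, 0.2, 0.1]))
```
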
February 12, 2021, 21:54 (GMT)
Fix T84899: instance ids are not unique in common cases

Ids stored in the `id` attribute cannot be assumed to be unique. While they might be unique in some cases, this is not something that can be guaranteed in general. For some use cases (e.g. generating "stable randomness" on points) uniqueness is not important. To support features like motion blur, unique ids are important though.

This patch implements a simple algorithm that turns non-unique ids into unique ones. It might fail to do so under very unlikely circumstances, in which case it returns non-unique ids instead of possibly going into an endless loop.

Here are some requirements I set for the algorithm:
* Ids that are already unique must not be changed.
* The same input should generate the same output.
* Handle cases when all ids are different and when all ids are the same equally well (in expected linear time).
* Small changes in the input id array should ideally only have a small impact on the output id array.

The reported bug happened because Cycles found multiple objects with the same id and thought that it was a single object that moved on every check.

Differential Revision: https://developer.blender.org/D10402

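One way to satisfy the listed requirements can be sketched in Python. This is an illustration of the idea only, not the algorithm actually committed:

```python
def make_ids_unique(ids, max_attempts=100):
    """Deterministically remap duplicate ids to unused values (illustrative)."""
    used = set()
    result = []
    for original in ids:
        candidate = original
        for attempt in range(max_attempts):
            if candidate not in used:
                break  # the first occurrence of an id is kept unchanged
            # Deterministic probing: the same input array always produces the
            # same output array.
            candidate = hash((original, attempt)) & 0x7FFFFFFF
        if candidate in used:
            # Give up after a bounded number of attempts rather than risk an
            # endless loop; the output may then still contain duplicates.
            candidate = original
        used.add(candidate)
        result.append(candidate)
    return result
```
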
February 12, 2021, 18:23 (GMT)
Update texture creation: Dimensions are now passed as a tuple or int

February 12, 2021, 16:22 (GMT)
Remove limitation of cubemap layers to be multiple of 6 |
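As a rough sketch of what this enables on the Python side, assuming the `GPUTexture` keywords this branch exposes (`layers`, `is_cubemap`), an assumed interpretation of `layers` as the number of cubemaps in the array, and an active GPU context inside Blender:

```python
import gpu

# A single cubemap.
cube = gpu.types.GPUTexture((512, 512), is_cubemap=True, format='RGBA8')

# A cubemap array; per the commit above, 'layers' no longer has to be a
# multiple of 6.
cube_array = gpu.types.GPUTexture((512, 512), layers=3, is_cubemap=True, format='RGBA8')
```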