
"Unlock_task_scheduler" branch

Total commits: 9
Total committers: 1
First commit: March 3, 2017
Latest commit: March 3, 2017


Commits by Date

Date            Number of Commits
March 3, 2017   9

Committers

Author             Number of Commits
Bastien Montagne   9

Popular Files

Filename     Total Edits
task.c       9
BLI_task.h   3

Latest Commits Feed

March 3, 2017, 16:42 (GMT)
Revert "Attempt to address nearly-starving cases."

This reverts commit 32959917ee112200125e3e742afb528fc2196072.

Definitely gives worse performance. Looks like any overhead we add to
task management always costs more than the better scheduling it might
give us...
March 3, 2017, 16:42 (GMT)
Attempt to address nearly-starving cases.

The idea here is to reduce the number of threads a pool is allowed to use,
in case it does not get tasks quickly enough.

This does not seem to give a really great result (the check has to be done
only once every 200 pushed tasks to avoid too much overhead), but I cannot
reproduce that nearly-starving case here so far. @sergey, curious whether it
makes any difference on your 12 cores with 14_03_G?
March 3, 2017, 16:42 (GMT)
Fix use-after-free concurrent issues.
March 3, 2017, 16:42 (GMT)
Never starve main thread (run_and_wait) from tasks!
March 3, 2017, 16:42 (GMT)
Cleanup, factorization, comments, and some fixes for potential issues.
March 3, 2017, 16:42 (GMT)
Inline all task-pushing helpers.

Those are small enough to be worth it, and it does give me a ~2% speedup here...
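
As a rough illustration of the change (not the actual task.c code), here is a small hypothetical push helper written as static inline so the compiler can fold it into each caller; Blender's sources use their own BLI_INLINE macro for the same purpose:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real task structures. */
typedef struct TaskSketch {
  void (*run)(void *userdata);
  void *userdata;
} TaskSketch;

typedef struct TaskQueueSketch {
  TaskSketch items[1024];
  int head, tail;
} TaskQueueSketch;

/* A helper like this runs for every single pushed task, so removing the call
 * overhead by inlining becomes measurable once there are many thousands of
 * very small tasks. */
static inline bool task_queue_push(TaskQueueSketch *queue, TaskSketch task)
{
  const int next = (queue->tail + 1) % 1024;
  if (next == queue->head) {
    return false; /* queue full */
  }
  queue->items[queue->tail] = task;
  queue->tail = next;
  return true;
}
```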
March 3, 2017, 16:42 (GMT)
Attempt to address performance issues of the task scheduler with lots of very small tasks.

This is partially based on Sergey's work from D2421, but pushes things
a bit further. Basically:
- We keep a scheduler-wide counter of TODO tasks, which lets workers avoid
any locking (even the spinlock) when the queue is empty.
- We spin/nanosleep a bit (less than a ms) when we cannot find a task,
before going into a real condition-waiting sleep.
- We keep a counter of condition-sleeping threads, and only send
condition notifications when we know some threads are waiting on it.

In other words, when no tasks are available, we spend a bit of time in a
rather high-activity but very cheap and totally lock-free loop, before
going into the more expensive real condition-waiting sleep.
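
A minimal sketch of that worker-side logic, assuming pthreads and C11 atomics; the struct and function names below are hypothetical stand-ins, not the branch's real API:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

typedef struct SchedulerSketch {
  atomic_int num_todo;     /* tasks waiting in any queue */
  atomic_int num_sleepers; /* workers blocked on the condition */
  pthread_mutex_t mutex;
  pthread_cond_t cond;
} SchedulerSketch;

/* Worker side: cheap lock-free polling first, real sleep only as last resort. */
static void worker_wait_for_task(SchedulerSketch *sched)
{
  /* 1) Spin for a few hundred microseconds, checking only the atomic
   *    counter; no lock (not even a spinlock) is touched while the queue
   *    stays empty. */
  for (int i = 0; i < 200; i++) {
    if (atomic_load(&sched->num_todo) > 0) {
      return; /* work showed up, go grab it */
    }
    struct timespec ts = {0, 2000}; /* 2 microseconds (POSIX nanosleep) */
    nanosleep(&ts, NULL);
  }

  /* 2) Still nothing: register as a sleeper and do a real condition wait. */
  pthread_mutex_lock(&sched->mutex);
  atomic_fetch_add(&sched->num_sleepers, 1);
  while (atomic_load(&sched->num_todo) == 0) {
    pthread_cond_wait(&sched->cond, &sched->mutex);
  }
  atomic_fetch_sub(&sched->num_sleepers, 1);
  pthread_mutex_unlock(&sched->mutex);
}

/* Push side: only pay for a notification when someone is actually asleep. */
static void scheduler_notify(SchedulerSketch *sched)
{
  atomic_fetch_add(&sched->num_todo, 1);
  if (atomic_load(&sched->num_sleepers) > 0) {
    pthread_mutex_lock(&sched->mutex);
    pthread_cond_signal(&sched->cond);
    pthread_mutex_unlock(&sched->mutex);
  }
}
```

The push side only takes the mutex and signals when the sleeper counter says someone is actually asleep, which is what keeps the common no-sleeper case entirely lock-free.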

No noticeable speedup in a complex production scene (the barbershop one);
here master, D2421 and this code give roughly the same performance (about
30% slower in the new depsgraph than in the old one).

But with the test file from T50027 and the new depsgraph, after the initial bake,
master gives me ~14 fps, D2421 ~14.5 fps, and this code ~19.5 fps.

Note that in theory we could get rid of the condition completely and stay
in the nanosleep loop, but this implies rather high 'noise' (about 3% CPU
usage here with 8 cores), and going into the condition-waiting state after
a few hundred microseconds does not give me any measurable slowdown.

Also note that this code only works on POSIX systems (so no Windows; not
sure how to do our nanosleeps on that OS :/).

Reviewers: sergey

Differential Revision: https://developer.blender.org/D2426
March 3, 2017, 16:42 (GMT)
Some minor changes from review.
March 3, 2017, 16:42 (GMT)
Do fewer nanosleep loops.

Tested that already without much change yesterday, but for some reason
today it gives me another 2-3% speedup in both test files.

It should also mitigate the (supposed) almost-starving situation,
hopefully.
