Other Classes
The following classes are available globally.
-
An efficient, work-stealing, general-purpose compute thread pool.
NonBlockingThreadPool can be cleaned up by calling shutDown(), which will block until all threads in the thread pool have exited. If shutDown() is never called, NonBlockingThreadPool will never be deallocated.

NonBlockingThreadPool uses atomics to implement a non-blocking thread pool. During normal execution, no locks are acquired or released. This can result in efficient parallelism across many cores.
NonBlockingThreadPool is designed to scale from laptops to high-core-count servers. Although the thread pool size can be manually tuned, the most efficient configuration is often a one-to-one mapping between hardware threads and worker threads, as this allows full use of the hardware while avoiding unnecessary context switches. I/O-heavy workloads may want to reduce the thread pool count to dedicate a core or two to I/O processing.

Each thread managed by this thread pool maintains its own fixed-size pending task queue. The workers loop, trying to get their next task from their own queue first; if that queue is empty, the worker tries to steal work from the pending task queues of other threads in the pool.
NonBlockingThreadPool implements important optimizations based on the calling thread. There are key fast paths taken when calling functions on NonBlockingThreadPool from threads that have been registered with the pool (or from threads managed by the pool itself). To help users build performant applications, NonBlockingThreadPool will by default trap (and exit the process) if functions are called on it from non-fast-path threads. You can change this behavior by setting allowNonFastPathThreads: true at initialization.

To avoid wasting excessive CPU cycles, the worker threads managed by NonBlockingThreadPool will suspend themselves (using locks to inform the host kernel).

NonBlockingThreadPool is parameterized by an environment, which allows the thread pool to interoperate seamlessly within a larger application by reusing its concurrency primitives (such as the locks and condition variables used for thread parking) and even allowing a custom thread allocator.

Local tasks typically execute in LIFO order, which is often optimal for cache locality of compute-intensive tasks. Other threads steal work in FIFO order, which admits an efficient (dynamic) schedule for typical divide-and-conquer algorithms.
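The scheduling order can be pictured with the following simplified, single-threaded sketch. It is purely illustrative and not the library's implementation; the names WorkItem, WorkerQueues, and nextTask(for:) are hypothetical. It only shows the consultation order described above: pop the newest task from the worker's own queue, otherwise steal the oldest task from another worker's queue.

    struct WorkItem { let id: Int }

    final class WorkerQueues {
        // One array-backed queue per worker; index == worker id.
        var queues: [[WorkItem]]
        init(workerCount: Int) { queues = Array(repeating: [], count: workerCount) }

        // Next task for `worker`: own queue newest-first, then steal oldest from others.
        func nextTask(for worker: Int) -> WorkItem? {
            if let task = queues[worker].popLast() { return task }   // LIFO: good cache locality
            for victim in queues.indices where victim != worker {
                if !queues[victim].isEmpty {
                    return queues[victim].removeFirst()              // FIFO steal: oldest work
                }
            }
            return nil  // Nothing to run; a real worker would park itself here.
        }
    }

    let q = WorkerQueues(workerCount: 2)
    q.queues[0] = [WorkItem(id: 0), WorkItem(id: 1)]
    print(q.nextTask(for: 0)!.id)  // 1: the worker runs its newest local task first (LIFO)
    print(q.nextTask(for: 1)!.id)  // 0: worker 1 steals worker 0's oldest task (FIFO)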
This implementation is inspired by the Eigen thread pool library, TFRT, as well as:
"Thread Scheduling for Multiprogrammed Multiprocessors", Nimar S. Arora, Robert D. Blumofe, and C. Greg Plaxton.
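A minimal usage sketch follows. Only shutDown(), the allowNonFastPathThreads option, and the Environment: ConcurrencyPlatform requirement come from the documentation above; the module name, the environment type (PosixConcurrencyPlatform), the initializer parameters (name:, threadCount:), and the dispatch(_:) method are assumptions for illustration.

    import Foundation
    import PenguinParallel  // assumed module name

    // Environment type and initializer parameters below are assumptions.
    let pool = NonBlockingThreadPool<PosixConcurrencyPlatform>(
        name: "compute",
        threadCount: ProcessInfo.processInfo.activeProcessorCount)
    // (Pass allowNonFastPathThreads: true at initialization to permit calls
    // from threads that are not registered with the pool.)

    // Submit work; dispatch(_:) is an assumed ComputeThreadPool method.
    pool.dispatch {
        // ... compute-intensive task ...
    }

    // Blocks until all worker threads have exited; without this call the
    // pool is never deallocated.
    pool.shutDown()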
Declaration
Swift
public class NonBlockingThreadPool<Environment> : ComputeThreadPool where Environment : ConcurrencyPlatform
-
Allows waiting for arbitrary predicates in non-blocking algorithms.
You can think of NonblockingCondition as a condition variable, but the predicate to wait upon does not need to be protected by a mutex. Using NonblockingCondition in a non-blocking algorithm (instead of spinning) allows threads to go to sleep, saving potentially significant amounts of CPU resources.

To use NonblockingCondition, your algorithm should look like the following:

    let nbc = NonblockingCondition(...)

    // Waiting thread:
    if predicate { return doWork() }
    nbc.preWait(threadId)
    if predicate {
        nbc.cancelWait(threadId)
        return doWork()
    }
    nbc.commitWait(threadId)  // Puts current thread to sleep until notified.

    // Notifying thread:
    predicate = true
    nbc.notify()  // or nbc.notifyAll()
Notifying is cheap if there are no waiting threads. preWait and commitWait are not cheap, but they should only be executed if the preceding predicate check failed. This yields an efficient system in the general case where there is low contention.
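To make the pattern concrete, here is a hedged sketch of a small helper that blocks the calling thread until an atomically published flag is set. Only preWait, cancelWait, commitWait, notify/notifyAll, and the threadId argument come from the example above; the helper name waitUntilSet, the use of the swift-atomics ManagedAtomic type, the module names, and the defensive re-check loop are illustrative assumptions.

    import Atomics           // swift-atomics package (an assumption for the flag)
    import PenguinParallel   // assumed module name

    // Illustrative helper, not part of the library: parks the calling thread
    // until `flag` becomes true, without taking a mutex around the predicate.
    func waitUntilSet<Environment: ConcurrencyPlatform>(
        _ flag: ManagedAtomic<Bool>,
        using nbc: NonblockingCondition<Environment>,
        threadId: Int
    ) {
        while true {
            if flag.load(ordering: .acquiring) { return }   // Fast path: predicate already true.
            nbc.preWait(threadId)
            if flag.load(ordering: .acquiring) {            // Re-check to avoid a lost wakeup.
                nbc.cancelWait(threadId)
                return
            }
            nbc.commitWait(threadId)                        // Sleep until notify()/notifyAll().
            // Woken by a notifier: loop and re-check the predicate.
        }
    }

    // Notifying side: publish the predicate first, then notify.
    //   flag.store(true, ordering: .releasing)
    //   nbc.notifyAll()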
Declaration
Swift
final public class NonblockingCondition<Environment> where Environment : ConcurrencyPlatform
extension NonblockingCondition: CustomDebugStringConvertible