Other Protocols
The following protocols are available globally.
-
Abstracts over the concurrency primitives provided by different environments.
Some environments provide different concurrency primitives, such as locks, threads, and condition variables. This protocol allows writing code that is generic across these different concurrency environments.
These abstractions are designed to be relatively minimalistic in order to be easy to port to a variety of environments. Key environments include macOS, Linux, Android, Windows, and Google’s internal environment.
Declaration
Swift
public protocol ConcurrencyPlatform
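To illustrate the intent, code can be written once against a platform's associated types and then run in any environment that supplies a conforming platform. The following is only a sketch under assumptions: the Mutex associated type, the makeThread(name:_:) factory, and the lock()/unlock()/join() members are illustrative guesses, not confirmed requirements of these protocols.

// Illustrative sketch only: the members used here (Platform.Mutex, the
// makeThread(name:_:) factory, lock()/unlock(), join()) are assumed.
func runWorkers<Platform: ConcurrencyPlatform>(on platform: Platform) -> Int {
  let mutex = Platform.Mutex()            // assumed default initializer
  var completed = 0
  let workers = (0..<4).map { index in
    platform.makeThread(name: "worker-\(index)") {   // assumed factory method
      // ... do some environment-independent work ...
      mutex.lock()
      completed += 1
      mutex.unlock()
    }
  }
  workers.forEach { $0.join() }           // assumed ThreadProtocol member
  return completed
}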
-
Represents a thread of execution.
Declaration
Swift
public protocol ThreadProtocol
-
Mutual exclusion locks.
Declaration
Swift
public protocol MutexProtocol
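A common discipline is to bracket every access to shared state with a lock()/unlock() pair, using defer so the lock is released on every exit path. The init(), lock(), and unlock() members assumed below sketch typical mutex usage, not this protocol's exact requirements.

// Sketch of a mutex-guarded counter. init()/lock()/unlock() are assumed
// requirements, used only to show the locking discipline.
final class GuardedCounter<M: MutexProtocol> {
  private let mutex = M()        // assumed default initializer
  private var value = 0

  func increment() {
    mutex.lock()
    defer { mutex.unlock() }     // released on every exit path
    value += 1
  }

  func read() -> Int {
    mutex.lock()
    defer { mutex.unlock() }
    return value
  }
}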
-
Allows for waiting until a given condition is satisfied.
Declaration
Swift
public protocol ConditionMutexProtocol : MutexProtocol
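The intent is that, beyond mutual exclusion, a caller can block until some predicate over the protected state holds. In the sketch below, lockWhen(_:) is a hypothetical stand-in for an "acquire the lock once the condition is true" operation; the lock()/unlock() and init() spellings are likewise assumed.

// Sketch of a one-shot latch built on a condition mutex. `lockWhen(_:)`
// is a hypothetical name for "block until the predicate is satisfied,
// then return with the lock held"; lock()/unlock()/init() are assumed too.
final class Latch<M: ConditionMutexProtocol> {
  private let mutex = M()        // assumed default initializer
  private var isOpen = false

  func open() {
    mutex.lock()
    isOpen = true
    mutex.unlock()
  }

  func waitUntilOpen() {
    mutex.lockWhen { self.isOpen }   // hypothetical member
    mutex.unlock()                   // lock is held when lockWhen returns
  }
}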
-
A condition variable.
Only perform operations on self when holding the mutex associated with self.
Declaration
Swift
public protocol ConditionVariableProtocol
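The standard discipline, consistent with the note above, is: take the associated mutex, wait in a loop that re-checks the predicate (to tolerate spurious wakeups), and signal while still holding the mutex. The wait(_:) and signal() members and the default initializers below are assumptions made for this sketch; a real conformance would also need to tie the condition variable's mutex type to M.

// Sketch of the classic pattern: every operation on `condition` happens
// with `mutex` held. wait(_:) is assumed to atomically release and
// re-acquire the mutex; signal() is assumed to wake one waiter.
final class OneShotBox<Value, M: MutexProtocol, C: ConditionVariableProtocol> {
  private let mutex = M()
  private let condition = C()
  private var value: Value?

  func put(_ newValue: Value) {
    mutex.lock()
    value = newValue
    condition.signal()           // only touched while holding `mutex`
    mutex.unlock()
  }

  func take() -> Value {
    mutex.lock()
    defer { mutex.unlock() }
    while value == nil {         // loop tolerates spurious wakeups
      condition.wait(mutex)      // assumed: releases and re-acquires `mutex`
    }
    return value!
  }
}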
-
Abstracts over thread local storage.
Declaration
Swift
public protocol RawThreadLocalStorage
-
Allows efficient use of multi-core CPUs by managing a fixed-size collection of threads.
From first principles, a (CPU) compute-bound application will run at peak performance when overheads are minimized. Once enough parallelism is exposed to leverage all cores, the key overheads to minimize are context switching and thread creation/destruction. The optimal system configuration is thus a fixed-size threadpool with exactly one thread per CPU core (or rather, hyperthread). This configuration results in zero context switching, no additional kernel calls for thread creation and destruction, and full utilization of the hardware.
Unfortunately, in practice, it is infeasible to statically schedule work a priori onto a fixed pool of threads. Even when applying the same operation to a homogeneous dataset, there will inevitably be variability in execution time. (This can arise from I/O interrupts briefly taking over a core, page faults, or even different memory-access latencies across NUMA domains.) As a result, peak performance requires abstractions that are flexible and dynamic in their work allocation.
The ComputeThreadPool protocol is a foundational API designed to enable efficient use of hardware resources. Two APIs are exposed to support two kinds of parallelism; for additional details, please see the documentation associated with each.
Note: be sure to avoid executing code on the ComputeThreadPool that is not compute-bound. If you are doing I/O, use a dedicated threadpool, or use Swift NIO for high-performance non-blocking I/O.
Note: while there should be only one “physical” threadpool process-wide, there can be many virtual threadpools that compose on top of it to allow configuration and tuning. (This is why ComputeThreadPool is a protocol rather than a set of static methods.) Examples of additional threadpool abstractions include a separate threadpool per NUMA domain, threadpools supporting different task priorities, or higher-level parallelism primitives such as “wait-groups”.
See also: ComputeThreadPools
Declaration
Swift
public protocol ComputeThreadPool
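As a rough sketch of pool-based data parallelism, the following splits a reduction into chunks and lets the pool execute them, with each chunk writing to its own slot so there are no data races. The parallelFor(n:_:) method and its closure signature are assumptions about the API, shown only to convey the shape of such code.

// Sketch of a data-parallel sum on a compute pool. `parallelFor(n:_:)`
// (run the body for indices 0..<n, potentially on many threads) is an
// assumed method name and signature.
func parallelSum<Pool: ComputeThreadPool>(_ values: [Double], on pool: Pool) -> Double {
  let chunkCount = 8
  let chunkSize = (values.count + chunkCount - 1) / chunkCount
  var partials = [Double](repeating: 0, count: chunkCount)
  partials.withUnsafeMutableBufferPointer { buffer in
    pool.parallelFor(n: chunkCount) { chunk, _ in      // assumed API
      let start = min(chunk * chunkSize, values.count)
      let end = min(start + chunkSize, values.count)
      var sum = 0.0
      for i in start..<end { sum += values[i] }
      buffer[chunk] = sum        // each invocation writes a distinct slot
    }
  }
  return partials.reduce(0, +)
}

The chunked structure keeps each unit of work large enough to amortize scheduling overhead, which is the point of the fixed-size pool described above.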