added module documentation to multithreading
parent 43bd19643b
commit 898d878554
@@ -1,3 +1,34 @@
//! This module provides the functionality to create a thread pool of fixed capacity.
//! The pool can be used to dispatch functions or closures that will be executed
//! at some point in the future, each on its own thread. When a job is dispatched, the pool
//! tests whether a thread is available. If so, the pool directly launches a new thread to run
//! the supplied function. If no thread is available, the job is stalled until a thread is free
//! to run the first stalled job.
//!
//! The pool also keeps track of all the handles that [`std::thread::spawn`] returns. Hence, after
//! executing a job, the pool can still query the result of the function, which can be retrieved
//! at any time after submission. Once the result of a function has been retrieved, its handle is
//! discarded and can no longer be accessed through the thread pool.
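As an illustration, the dispatch-or-stall behavior described above can be sketched with standard-library primitives. `MiniPool` and its methods are hypothetical names invented for this sketch, not the module's actual API:

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct MiniPool {
    capacity: usize,
    active: Arc<AtomicUsize>,
    queue: Arc<Mutex<VecDeque<Job>>>,
    handles: Vec<thread::JoinHandle<()>>,
}

impl MiniPool {
    fn new(capacity: usize) -> Self {
        MiniPool {
            capacity,
            active: Arc::new(AtomicUsize::new(0)),
            queue: Arc::new(Mutex::new(VecDeque::new())),
            handles: Vec::new(),
        }
    }

    /// Launch the job on a new thread if capacity allows, else stall it.
    fn dispatch<F: FnOnce() + Send + 'static>(&mut self, job: F) {
        if self.active.load(Ordering::SeqCst) < self.capacity {
            self.active.fetch_add(1, Ordering::SeqCst);
            let active = Arc::clone(&self.active);
            let queue = Arc::clone(&self.queue);
            self.handles.push(thread::spawn(move || {
                job();
                // After finishing, drain any stalled jobs on this thread.
                loop {
                    let next = queue.lock().unwrap().pop_front();
                    match next {
                        Some(next_job) => next_job(),
                        None => break,
                    }
                }
                active.fetch_sub(1, Ordering::SeqCst);
            }));
        } else {
            self.queue.lock().unwrap().push_back(Box::new(job));
        }
    }

    fn join_all(&mut self) {
        for handle in self.handles.drain(..) {
            handle.join().unwrap();
        }
        // Simplification for this sketch: run any jobs still stalled in the
        // queue on the current thread so that no dispatched job is lost.
        loop {
            let next = self.queue.lock().unwrap().pop_front();
            match next {
                Some(job) => job(),
                None => break,
            }
        }
    }
}
```

Draining the leftover queue inside `join_all` is a simplification to keep the sketch deterministic; a real pool would coordinate workers and queue with proper signalling.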
//!
//! # Threads
//! The maximum number of threads to be used can be specified when creating a new thread pool.
//! Alternatively, the thread pool can be advised to determine the recommended number of threads
//! automatically. Note that automatic detection has its limitations due to possible side effects
//! of sandboxing, containerization, or VMs.
//! For further information see [`thread::available_parallelism`].
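A pool that picks the thread count automatically might fall back to a fixed default when detection fails, for example in a sandboxed environment. `recommended_threads` is a hypothetical helper written for this sketch:

```rust
use std::thread;

/// Return the recommended amount of parallelism, falling back to a
/// single thread when the platform cannot report a value.
fn recommended_threads() -> usize {
    thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1)
}
```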
//!
//! # Memory consumption over time
//! The pool stores the handle of every thread it launches, so memory consumption grows
//! steadily over time. Note that the pool does not perform any cleanup of the stored
//! handles. It is therefore recommended to make regular calls to `join_all` or
//! `get_finished` in order to clear the vector of handles and avoid unbounded memory
//! consumption.
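One way such a cleanup could work is to split off the handles of threads that have already terminated, using [`JoinHandle::is_finished`]. This free function is only a sketch of that behavior, not the pool's actual method:

```rust
use std::thread::JoinHandle;

/// Remove and return the handles of all finished threads, shrinking the
/// stored vector so it does not grow without bound.
fn get_finished<T>(handles: &mut Vec<JoinHandle<T>>) -> Vec<JoinHandle<T>> {
    let mut finished = Vec::new();
    let mut i = 0;
    while i < handles.len() {
        if handles[i].is_finished() {
            // swap_remove is O(1); order of the remaining handles changes.
            finished.push(handles.swap_remove(i));
        } else {
            i += 1;
        }
    }
    finished
}
```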
//!
//! # Portability
//! This implementation is not fully platform independent due to its usage of
//! [`std::sync::atomic::AtomicUsize`]. This type is used to avoid some of the locking
//! that a [`std::sync::Mutex`] wrapping a [`usize`] would otherwise require.
//! Note that atomic primitives are not available on all platforms but "can generally be
//! relied upon existing" (see: <https://doc.rust-lang.org/std/sync/atomic/index.html>).
//! Additionally, this implementation relies on the `load` and `store` operations
//! instead of more convenient ones like `fetch_add`, in order to avoid unnecessary calls
//! to `unwrap` or `expect` when obtaining a [`std::sync::MutexGuard`].
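The trade-off can be illustrated as follows. A `load`/`store` pair is not an atomic read-modify-write, so it is only correct while a single thread performs the update, which is the assumption made here:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Increment the counter with separate `load` and `store` operations.
/// Each operation is atomic on its own, but the pair is not: this is
/// only safe when a single thread performs the update. In exchange, no
/// `lock().unwrap()` on a Mutex is needed.
fn bump_single_writer(threads: &AtomicUsize) {
    let current = threads.load(Ordering::SeqCst);
    threads.store(current + 1, Ordering::SeqCst);
}
```

With multiple writers, the atomic read-modify-write `threads.fetch_add(1, Ordering::SeqCst)` would be required instead.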
use std::{
    any::Any,
    collections::VecDeque,
@@ -209,7 +240,7 @@ where
/// Execute the supplied closure on a new thread
/// and store the thread's handle into `handles`. When the thread
/// has finished executing the closure, it will look for any closures left in `queue` and
/// recursively execute the next one on a new thread. This method updates `threads` in order to
/// keep track of the number of active threads.
fn execute<F, T>(
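The recursive hand-off described in this doc comment might look roughly like the following sketch. `run_then_continue` and its joining behavior are illustrative simplifications, not the actual `execute` implementation:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

/// Run `job` on a new thread; when it finishes, pop the next stalled job
/// from `queue` and recursively launch it on a fresh thread.
fn run_then_continue(job: Job, queue: Arc<Mutex<VecDeque<Job>>>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        job();
        let next = queue.lock().unwrap().pop_front();
        if let Some(next_job) = next {
            // Join here so that one join on the outer handle waits for the
            // whole chain; a real pool would store each handle instead.
            run_then_continue(next_job, queue).join().unwrap();
        }
    })
}
```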