List of concurrency models

Accepted answer
Score: 11

Actor Model

I have heard of message passing, where no memory is shared.

Is it about Erlang-style Actors?

Scala uses this idea in its Actors framework (so in Scala it is not part of the language, just a library), and it looks quite appealing!

In a few words, Actors are objects that share no data at all but can use asynchronous messages for interaction. Actors can be located on one host or on different hosts, and they use an interesting error-handling policy: when an error happens, the actor simply dies.
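A minimal sketch of the idea in Python (no actor library, just a thread plus a mailbox; the message names here are made up for illustration):

```python
import threading
import queue

class Counter:
    """A minimal actor: private state, a mailbox, and its own thread.
    Other code interacts with it only by sending asynchronous messages."""

    def __init__(self):
        self._count = 0                    # private state, never shared
        self._mailbox = queue.Queue()      # asynchronous message channel
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "stop":
                break
            if msg == "inc":
                self._count += 1
            elif isinstance(msg, tuple) and msg[0] == "get":
                msg[1].put(self._count)    # reply via a channel the sender provided

    def send(self, msg):
        self._mailbox.put(msg)

actor = Counter()
for _ in range(5):
    actor.send("inc")
reply = queue.Queue()
actor.send(("get", reply))
result = reply.get()
print(result)   # 5 — every mutation happened inside the actor's own thread
actor.send("stop")
```

Since all messages go through one queue and are handled by one thread, no locking of the counter is ever needed — that is the whole point of the model.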

You should read more on this in the Erlang and Scala docs; it's a really straightforward and progressive approach!

Chapters 3, 17, 17.11:

http://www.scala-lang.org/sites/default/files/linuxsoft_archives/docu/files/ScalaByExample.pdf
https://en.wikipedia.org/wiki/Actor_model

Score: 9

COM Threading (Concurrency) Model

  • Single-Threaded Apartments
  • Multi-Threaded Apartments
  • Mixed Model Development

COM objects can be used in multiple threads of a process. The terms "Single-Threaded Apartment" (STA) and "Multi-Threaded Apartment" (MTA) are used to create a conceptual framework for describing the relationship between objects and threads, the concurrency relationships among objects, the means by which method calls are delivered to an object, and the rules for passing interface pointers among threads. Components and their clients choose between the following two apartment models presently supported by COM:

Single-Threaded Apartment model (STA): One or more threads in a process use COM, and calls to COM objects are synchronized by COM. Interfaces are marshaled between threads. A degenerate case of the single-threaded apartment model, where only one thread in a given process uses COM, is called the single-threading model. Previous Microsoft information and documentation has sometimes referred to the STA model simply as the "apartment model."

Multi-Threaded Apartment model (MTA): One or more threads use COM, and calls to COM objects associated with the MTA are made directly by all threads associated with the MTA, without any interposition of system code between caller and object. Because multiple clients may be calling objects more or less simultaneously (truly simultaneously on multi-processor systems), objects must synchronize their internal state by themselves. Interfaces are not marshaled between threads. Previous Microsoft information and documentation has sometimes referred to this model as the "free-threaded model."

Both the STA model and the MTA model can be used in the same process. This is sometimes referred to as a "mixed-model" process.


Other models according to Wikipedia

There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These models include:

Score: 7

Futures

A future is a place-holder for the undetermined result of a (concurrent) computation. Once the computation delivers a result, the associated future is eliminated by globally replacing it with the result value. That value may be a future on its own.

Whenever a future is requested by a concurrent computation, i.e. it tries to access its value, that computation automatically synchronizes on the future by blocking until it becomes determined or failed.

There are four kinds of futures:

  • concurrent futures stand for the result of a concurrent computation,
  • lazy futures stand for the result of a computation that is only performed on request,
  • promised futures stand for a value that is promised to be delivered later by explicit means,
  • failed futures represent the result of a computation that terminated with an exception.
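Python's standard library happens to demonstrate two of these kinds directly — a rough sketch (using `concurrent.futures`, which is Python's take on the idea rather than the exact semantics described above):

```python
from concurrent.futures import ThreadPoolExecutor

# A "concurrent future": a placeholder for the undetermined result
# of a computation running in another thread.
with ThreadPoolExecutor() as pool:
    fut = pool.submit(lambda: sum(range(1_000_000)))   # starts immediately
    # Requesting the value synchronizes on the future: .result()
    # blocks until the computation is determined (or failed).
    value = fut.result()

# A "failed future" carries an exception instead of a value;
# the exception is re-raised at the point where the value is requested.
failed = False
with ThreadPoolExecutor() as pool:
    bad = pool.submit(lambda: 1 // 0)
    try:
        bad.result()
    except ZeroDivisionError:
        failed = True

print(value, failed)   # 499999500000 True
```

Lazy and promised futures map roughly onto generators and `Future.set_result` in Python, but the correspondence is looser.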
Score: 5

Software transactional memory

In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. A transaction in this context is a piece of code that executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions.

The idea of providing hardware support for transactions originated in a 1986 paper and patent by Tom Knight[1]. The idea was popularized by Maurice Herlihy and J. Eliot B. Moss[2]. In 1995 Nir Shavit and Dan Touitou extended this idea to software-only transactional memory (STM)[3]. STM has recently been the focus of intense research and support for practical implementations is growing.
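To make the read/validate/retry cycle concrete, here is a toy STM sketch in Python — my own simplification, not a real STM (no contention management, and commits are serialized by one coarse lock):

```python
import threading

class Ref:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(tx):
    """Run tx(read, write) as a transaction: buffer writes, validate
    read versions at commit time, and retry on conflict."""
    while True:
        reads, writes = {}, {}
        def read(ref):
            if ref in writes:
                return writes[ref]          # see our own pending write
            reads.setdefault(ref, ref.version)
            return ref.value
        def write(ref, value):
            writes[ref] = value             # buffered, invisible to others
        result = tx(read, write)
        with _commit_lock:
            # commit succeeds only if nothing we read has changed since
            if all(ref.version == v for ref, v in reads.items()):
                for ref, value in writes.items():
                    ref.value, ref.version = value, ref.version + 1
                return result
        # a concurrent commit invalidated one of our reads: retry

a, b = Ref(100), Ref(0)

def transfer(read, write):
    # both writes become visible at a single instant, or not at all
    write(a, read(a) - 30)
    write(b, read(b) + 30)

atomically(transfer)
print(a.value, b.value)   # 70 30
```

The key property shown: other transactions can never observe the state where `a` has been debited but `b` not yet credited.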

Score: 4

There's also map/reduce.

The idea is to spawn many instances of a sub-problem and to combine the answers when they're done. A simple example would be matrix multiplication, which is the sum of several dot products. You spawn a worker thread for each dot product, and when all the threads are finished you sum the results.
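The matrix example above, sketched with a Python thread pool (one task per dot product, combined at the end):

```python
from concurrent.futures import ThreadPoolExecutor

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

def dot(row, col):
    # one sub-problem: the dot product for a single result cell
    return sum(r * c for r, c in zip(row, col))

cols = list(zip(*B))  # columns of B
with ThreadPoolExecutor() as pool:
    # "map": spawn one task per dot product
    futures = [[pool.submit(dot, row, col) for col in cols] for row in A]
    # "reduce": combine the answers once they are all done
    C = [[f.result() for f in fs] for fs in futures]

print(C)   # [[19, 22], [43, 50]]
```

Each sub-problem is independent, which is exactly why this style parallelizes so cleanly on GPUs and in frameworks like MapReduce.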

This is how GPUs, functional languages such as LISP/Scheme/APL, and some frameworks (Google's Map/Reduce) handle concurrency.

Score: 2

Coroutines

In computer science, coroutines are program components that generalize subroutines to allow multiple entry points for suspending and resuming execution at certain locations. Coroutines are well-suited for implementing more familiar program components such as cooperative tasks, iterators, infinite lists and pipes.

Score: 1

There's also non-blocking concurrency, such as compare-and-swap and load-link/store-conditional instructions. For example, compare-and-swap (cas) could be defined as so:

bool cas( int new_value, int current_value, int *location );

This operation will then attempt to set the value at location to the value passed in new_value, but only if the value at location is the same as current_value. It requires only one instruction and is usually how blocking concurrency primitives (mutexes/semaphores/etc.) are implemented.
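Python doesn't expose a hardware CAS, but the classic read/attempt/retry loop can be simulated — here a lock merely stands in for the atomicity the single instruction would give you:

```python
import threading

_atomic = threading.Lock()   # stands in for the hardware's atomicity guarantee

def cas(location, expected, new_value):
    """Simulated compare-and-swap on location[0]: swap in new_value only
    if the slot still holds expected, and report whether it happened."""
    with _atomic:
        if location[0] == expected:
            location[0] = new_value
            return True
        return False

def increment(location):
    # the classic non-blocking retry loop: reread and retry until no one raced us
    while True:
        current = location[0]
        if cas(location, current, current + 1):
            return

counter = [0]
threads = [threading.Thread(target=lambda: [increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter[0])   # 4000 — no lost updates despite 4 racing threads
```

No thread ever blocks holding the counter; a failed CAS just means "someone got there first, try again with the fresh value."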

Score: 1

IPC (including MPI and RMI)

Hi,
in the wiki pages you can find that MPI (Message Passing Interface) is a method of the general IPC technique: http://en.wikipedia.org/wiki/Inter-process_communication
Another interesting approach is Remote Procedure Call. For example, Java's RMI enables you to focus only on your application domain and communication patterns. It's an "application-level" concurrency.
http://www.oracle.com/technetwork/java/javase/tech/index-jsp-136424.html

There are various design patterns/tools available to aid in shared-memory model parallelization. Apart from the futures mentioned above, one can also take advantage of:
1. Thread pool pattern - focuses on task distribution between a fixed number of threads: http://en.wikipedia.org/wiki/Thread_pool_pattern
2. Scheduler pattern - controls the threads' execution according to a chosen scheduling policy: http://en.wikipedia.org/wiki/Scheduler_pattern
3. Reactor pattern - embeds a single-threaded application in a parallel environment: http://en.wikipedia.org/wiki/Reactor_pattern
4. OpenMP - allows parts of the code to be parallelized by means of preprocessor pragmas
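The first of those patterns is easy to sketch by hand — a fixed set of worker threads pulling tasks from a shared queue (a toy version; `concurrent.futures.ThreadPoolExecutor` is the production equivalent in Python):

```python
import threading
import queue

def worker(tasks, results):
    # each of the fixed workers pulls tasks until it receives a poison pill
    while True:
        task = tasks.get()
        if task is None:
            break
        results.put(task * task)   # the "work": square the number

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(3)]
for w in workers:
    w.start()

for n in range(10):          # work is distributed among the fixed threads
    tasks.put(n)
for _ in workers:            # one poison pill per worker to shut down cleanly
    tasks.put(None)
for w in workers:
    w.join()

total = sum(results.get() for _ in range(10))
print(total)   # 285 = 0^2 + 1^2 + ... + 9^2
```

The point of the pattern: thread creation cost is paid once, and the queue decouples how fast work arrives from how many threads service it.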

Regards,
Marcin

Score: 1

The Parallel Random Access Machine (PRAM) is useful for complexity/tractability issues (please refer to a nice book for details).

You will also find something about models here (by Blaise Barney).

Score: 1

How about tuple space?

A tuple space is an implementation of the associative memory paradigm for parallel/distributed computing. It provides a repository of tuples that can be accessed concurrently. As an illustrative example, consider that there are a group of processors that produce pieces of data and a group of processors that use the data. Producers post their data as tuples in the space, and the consumers then retrieve data from the space that match a certain pattern. This is also known as the blackboard metaphor. Tuple space may be thought of as a form of distributed shared memory.
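A minimal in-process sketch of the producer/consumer interaction described above (my own toy class, using `None` as the pattern wildcard — real tuple-space systems like Linda add distribution and richer matching):

```python
import threading

class TupleSpace:
    """A minimal tuple space: producers put tuples, consumers take the
    first tuple matching a pattern (None acts as a wildcard)."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, pattern):
        def matches(tup):
            return len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup))
        with self._cond:
            while True:
                for tup in self._tuples:
                    if matches(tup):
                        self._tuples.remove(tup)   # take removes the tuple
                        return tup
                self._cond.wait()   # block until a producer posts something

space = TupleSpace()
# a producer posts a reading; the consumer retrieves by pattern,
# never knowing (or caring) which producer supplied it
threading.Thread(target=lambda: space.put(("temp", "room1", 21))).start()
reading = space.take(("temp", "room1", None))
print(reading)   # ('temp', 'room1', 21)
```

Producers and consumers are fully decoupled in time and identity — the defining property of the blackboard metaphor.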

Score: 1

LMAX's Disruptor pattern keeps data in place and ensures that only one thread (consumer or producer) owns a given data item (= queue slot) at a time.
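A very rough sketch of that single-ownership idea for one producer and one consumer (my simplification: real Disruptor uses cache-padded sequence counters and memory barriers; here Python's GIL stands in for those):

```python
import threading

class RingBuffer:
    """Items stay in place in a preallocated ring; at any moment each slot
    is owned by exactly one side, tracked by two sequence counters."""
    def __init__(self, size):
        self.slots = [None] * size
        self.size = size
        self.head = 0    # next slot the producer will write (producer-owned)
        self.tail = 0    # next slot the consumer will read (consumer-owned)

    def publish(self, item):
        while self.head - self.tail == self.size:
            pass                        # ring full: spin until a slot frees up
        self.slots[self.head % self.size] = item   # data written in place
        self.head += 1                  # hands ownership of the slot over

    def consume(self):
        while self.head == self.tail:
            pass                        # ring empty: spin until data arrives
        item = self.slots[self.tail % self.size]
        self.tail += 1                  # hands the slot back to the producer
        return item

ring = RingBuffer(4)
out = []
consumer = threading.Thread(target=lambda: out.extend(ring.consume() for _ in range(8)))
consumer.start()
for i in range(8):
    ring.publish(i)
consumer.join()
print(out)   # [0, 1, 2, 3, 4, 5, 6, 7]
```

No locks and no copying: publishing is just a counter increment, which is where the Disruptor gets its throughput.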
