The original document can be downloaded
here.
As the first homework for the concurrent programming class we read this article, and this is a small summary.
Computer hardware is changing in a way that used to be rare: it is moving to multicore processors. What many people do not understand is that this only benefits concurrent applications (which have existed for some 30 years). Today's desktop applications will not run much faster than they do now. In fact, they may run slightly slower on newer chips, as individual cores become simpler and run at lower clock speeds to reduce power consumption on dense multicore processors.
One of the conceptual problems of concurrent programming is that concurrency requires programmers to think in a way humans find difficult; nevertheless, multicore machines are the future.
Consequences: a new era in software
Semaphores and coroutines are the assembler of concurrency, and locks and threads are the slightly higher-level structured constructs of concurrency. What is needed is an OO (object-oriented) equivalent for concurrency: higher-level abstractions for building concurrent programs.
If you think that sequential programming is hard, concurrent programming is demonstrably more difficult, because it requires synchronization analysis among other things.
Differences between client and server applications
Many server-based programs typically have an abundance of parallelism, as they simultaneously handle many independent request streams.
In client applications, concurrency is not nearly as well structured or regular; typically it is found by dividing a computation into finer pieces.
Programming models
Parallel programming models differ significantly in two dimensions: the granularity of the parallel operations and the degree of coupling between these tasks.
Parallel instruction execution generally requires hardware support. Multicore processors reduce communication and synchronization costs, as compared with conventional multiprocessors, which can reduce the overhead burden on smaller pieces of code.
The other dimension is the degree of coupling in the communication and synchronization between the operations. The ideal is none: operations run entirely independently and produce distinct outputs.
Independent parallelism
Independent parallelism is the kind in which one or more operations are applied independently to each item in a data collection.
Fine-grained data parallelism relies on the independence of the operations executed concurrently. They should not share input data or results and should be executable without coordination.
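As a minimal sketch of this idea (assuming Java 8+ parallel streams; the article itself names no language or API), each element of a list is squared independently, with no shared inputs, no shared outputs, and no coordination:

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class IndependentParallelism {
        public static void main(String[] args) {
            List<Integer> input =
                IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());

            // Each element is squared on its own: no operation reads another
            // operation's input or writes another operation's result.
            List<Integer> squares = input.parallelStream()
                                         .map(x -> x * x)
                                         .collect(Collectors.toList());

            System.out.println(squares);
        }
    }

Because the operations share nothing, the runtime is free to spread them across cores without any synchronization.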
Regular parallelism
The next step is to apply the same operation to a collection of data when the computations are mutually dependent.
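As a hedged illustration (the double-buffering scheme and all names here are mine, not the article's), consider a smoothing step in Java where each new value is the average of its neighbors: every task reads a consistent old array and writes a separate new one, and the end of each parallel step is the synchronization point between the dependent computations.

    import java.util.Arrays;
    import java.util.stream.IntStream;

    public class RegularParallelism {
        public static void main(String[] args) {
            double[] current = {1, 2, 3, 4, 5, 6, 7, 8};
            double[] next = new double[current.length];

            for (int step = 0; step < 3; step++) {
                final double[] in = current;
                final double[] out = next;
                // Each new value depends on its neighbors, so all tasks read
                // the old array and write the new one; the step boundary is
                // where the dependent computations synchronize.
                IntStream.range(1, in.length - 1)
                         .parallel()
                         .forEach(i -> out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0);
                out[0] = in[0];
                out[out.length - 1] = in[in.length - 1];
                current = out;  // swap the buffers for the next step
                next = in;
            }
            System.out.println(Arrays.toString(current));
        }
    }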
The problem of shared state, and why locks aren't the answer
When two tasks try to access the same object and one could modify its state, if we do nothing to coordinate the tasks we have a data race. Races are bad because the concurrent tasks can read and write inconsistent or corrupted values.
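For example, here is a small Java sketch of a data race (the class name and iteration counts are illustrative): two threads increment a shared counter, and because counter++ is really a read-modify-write sequence, increments can be lost.

    public class RaceDemo {
        static int counter = 0; // shared, unprotected state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++; // read, add, write: two threads can interleave here
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join();  t2.join();

            // Usually prints less than 200000 because updates are lost.
            System.out.println(counter);
        }
    }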
There is a rich variety of synchronization devices that can prevent races. The simplest of these is a lock. Each task that wants to access a piece of shared data must acquire the lock for that data, perform its computation, and then release the lock so other operations on the data can proceed.
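Continuing the previous sketch with an explicit lock (here java.util.concurrent.locks.ReentrantLock; the article prescribes no particular API), each task acquires the lock, updates the shared counter, and releases it, which removes the race:

    import java.util.concurrent.locks.ReentrantLock;

    public class LockDemo {
        static int counter = 0;
        static final ReentrantLock lock = new ReentrantLock(); // guards counter

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    lock.lock();       // acquire the lock for the shared data
                    try {
                        counter++;     // perform the computation
                    } finally {
                        lock.unlock(); // release so other tasks can proceed
                    }
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(counter); // now reliably 200000
        }
    }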
A fundamental problem with locks is that they are not composable. You can't take two correct lock-based pieces of code, combine them, and know that the result is still correct.
In its simplest form, deadlock happens when two locks are acquired by two tasks in opposite order: task T1 takes lock L1, task T2 takes lock L2, and then T1 tries to take L2 while T2 tries to take L1. Both block forever. There are techniques for avoiding deadlocks, such as lock leveling and lock hierarchies.
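A minimal Java sketch of exactly that T1/L1, T2/L2 scenario (the names, and the sleep that just makes the bad interleaving likely, are mine):

    public class DeadlockDemo {
        static final Object lockA = new Object(); // plays the role of L1
        static final Object lockB = new Object(); // plays the role of L2

        public static void main(String[] args) {
            // T1 takes L1 then L2; T2 takes L2 then L1. Each can end up
            // holding one lock while waiting forever for the other, so
            // this program usually never terminates.
            Thread t1 = new Thread(() -> {
                synchronized (lockA) {
                    pause();
                    synchronized (lockB) { System.out.println("t1 done"); }
                }
            });
            Thread t2 = new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { System.out.println("t2 done"); }
                }
            });
            t1.start();
            t2.start();
        }

        static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        }
    }

Lock leveling would repair this sketch by forcing both threads to acquire lockA before lockB, so the circular wait can never form.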
There are at least three major problems with synchronized methods. First, they are not appropriate for types whose methods call virtual functions on other objects. Second, synchronized methods can perform too much locking, by acquiring and releasing locks on all object instances. Third, synchronized methods can also perform too little locking, by not preserving atomicity when a program calls multiple methods on an object or on different objects.
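The third problem can be sketched in Java (the Account type and the amounts are hypothetical): each method is atomic on its own, but a check-then-act sequence spanning two calls is not, so a second task can slip in between them.

    class Account {
        private int balance = 100;
        public synchronized int getBalance() { return balance; }
        public synchronized void withdraw(int amount) { balance -= amount; }
    }

    public class TooLittleLocking {
        // Each Account method is individually synchronized, but this
        // check-then-act spans two calls: another thread can withdraw
        // between them, so the balance can go negative despite the check.
        static void tryWithdraw(Account acct, int amount) {
            if (acct.getBalance() >= amount) {
                acct.withdraw(amount);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Account acct = new Account();
            Thread t1 = new Thread(() -> tryWithdraw(acct, 100));
            Thread t2 = new Thread(() -> tryWithdraw(acct, 100));
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(acct.getBalance()); // can print -100
        }
    }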
...
...
...
The original document, written by Herb Sutter and James Larus, can be downloaded from
here.