Since the spinlock is added back in section 5.2, the original content is
restored. As with the rmw example, the goal is to provide an
easy-to-understand example first and improve it later on.
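
For context, the lock()/unlock() pair in the restored text stands in for the spinlock from section 5.2. Below is a minimal sketch of such a spinlock using C11 stdatomic, assuming an atomic_flag-based implementation; the names spin_lock and spin_unlock are illustrative rather than the exact code in the primer:

#include <stdatomic.h>

/* Illustrative spinlock built on atomic_flag. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    /* Acquire: reads/writes after the lock cannot be moved above it. */
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ; /* busy-wait until the flag is released */
}

static void spin_unlock(void)
{
    /* Release: reads/writes before the unlock cannot be moved below it. */
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}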
concurrency-primer.tex: 27 additions & 27 deletions
@@ -789,34 +789,34 @@ \section{Do we always need sequentially consistent operations?}
 they inhibit optimizations that your compiler and hardware would otherwise make.
 
 What if we could avoid some of this slowdown?
-consider the example provided in \secref{rmw},
-where an atomic pointer \monobox{prev} in \monobox{struct idle\_job} is assigned an address in function \monobox{thread\_pool\_init}:
-\begin{ccode}
-idle_job->job.args = NULL;
-idle_job->job.next = &idle_job->job;
-idle_job->job.prev = &idle_job->job;
-idle_job->prev = &idle_job->job; /* assign to atomic pointer */
-thrd_pool->func = worker;
-thrd_pool->head = idle_job;
-thrd_pool->state = idle;
-thrd_pool->size = size;
-\end{ccode}
-An simple assignment on an atomic object is equivalent to \cc|atomic_store(A* obj , C desired)|.
-In this case, statements above line 4 is guaranteed to happen before the atomic operation,
-and the atomic operation is guaranteed to happen before statements below line 4.
-However, this series of operations are filling fields in structures. They do not have data dependecies so they are not necessarily executed in some order.
+Consider a simple case like the spinlock from \secref{spinlock}.
+Between the \cc|lock()| and \cc|unlock()| calls,
+we have a \introduce{critical section} where we can safely modify shared state protected by the lock.
+Outside this critical section,
+we only read and write to things that are not shared with other threads.
+\begin{cppcode}
+deepThought.calculate(); // non-shared
+
+lock(); // Lock; critical section begins
+sharedState.subject = "Life, the universe and everything";
+sharedState.answer = 42;
+unlock(); // Unlock; critical section ends
+
+demolishEarth(vogons); // non-shared
+\end{cppcode}
+
+It is vital that reads and writes to shared memory do not move outside the critical section.
+But the opposite is not true!
+The compiler and hardware could move as much as they want \emph{into} the critical section without causing any trouble.
 We have no problem with the following if it is somehow faster:
-\begin{ccode}
-idle_job->prev = &idle_job->job; /* assign to atomic pointer */
-idle_job->job.args = NULL;
-idle_job->job.next = &idle_job->job;
-idle_job->job.prev = &idle_job->job;
-thrd_pool->func = worker;
-thrd_pool->head = idle_job;
-thrd_pool->state = idle;
-thrd_pool->size = size;
-\end{ccode}
-The compiler is free to reorder instructions and the befavior of \monobox{thread\_pool\_init} would remain the same.
+\begin{cppcode}
+lock(); // Lock; critical section begins
+deepThought.calculate(); // non-shared
+sharedState.subject = "Life, the universe and everything";
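
The one-way movement described in the new text follows from the ordering a lock must provide: lock() needs at least acquire semantics and unlock() needs at least release semantics, so accesses to shared state cannot leak out of the critical section, while unrelated work may legally sink into it. A hypothetical usage sketch in C, reusing the illustrative spin_lock/spin_unlock above (sharedState and the helper names mirror the example and are assumptions, not code from the primer):

struct shared_state {
    const char *subject;
    int answer;
};
static struct shared_state sharedState;

static int deep_thought(void)   /* stand-in for deepThought.calculate() */
{
    return 42;                  /* non-shared work */
}

void update_shared(void)
{
    int result = deep_thought();  /* non-shared; free to sink below spin_lock() */

    spin_lock();                  /* acquire: later accesses cannot move above this */
    sharedState.subject = "Life, the universe and everything";
    sharedState.answer = result;
    spin_unlock();                /* release: earlier accesses cannot move below this */

    /* demolishEarth(vogons) would go here; it too may be hoisted inside. */
}

Hoisting the non-shared work into the critical section changes nothing observable, whereas letting the sharedState accesses escape the lock would break the protection the lock exists to provide.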