Commit a1d1493

Peter Zijlstra authored and Ingo Molnar committed
workqueue/lockdep: 'Fix' flush_work() annotation
The flush_work() annotation as introduced by commit:

  e159489 ("workqueue: relax lockdep annotation on flush_work()")

hits on the lockdep problem with recursive read locks.

The situation as described is:

 Work W1:               Work W2:        Task:

 ARR(Q)                 ARR(Q)          flush_workqueue(Q)
 A(W1)                  A(W2)           A(Q)
                                        flush_work(W2)
 R(Q)                                     A(W2)
                                          R(W2)
                                        if (special)
                                          A(Q)
                                        else
                                          ARR(Q)
                                        R(Q)

where: A - acquire, ARR - acquire-read-recursive, R - release.

Where under 'special' conditions we want to trigger a lock recursion
deadlock, but otherwise allow the flush_work(). The allowing is done
by using recursive read locks (ARR), but lockdep is broken for
recursive stuff.

However, there appears to be no need to acquire the lock if we're not
'special', so if we remove the 'else' clause things become much
simpler and no longer need the recursion thing at all.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>
1 parent e914985 commit a1d1493

File tree: 1 file changed, +11 −9 lines

kernel/workqueue.c

Lines changed: 11 additions & 9 deletions
@@ -2091,7 +2091,7 @@ __acquires(&pool->lock)
 
 	spin_unlock_irq(&pool->lock);
 
-	lock_map_acquire_read(&pwq->wq->lockdep_map);
+	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
 	crossrelease_hist_start(XHLOCK_PROC);
 	trace_workqueue_execute_start(work);
@@ -2826,16 +2826,18 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
 	spin_unlock_irq(&pool->lock);
 
 	/*
-	 * If @max_active is 1 or rescuer is in use, flushing another work
-	 * item on the same workqueue may lead to deadlock. Make sure the
-	 * flusher is not running on the same workqueue by verifying write
-	 * access.
+	 * Force a lock recursion deadlock when using flush_work() inside a
+	 * single-threaded or rescuer equipped workqueue.
+	 *
+	 * For single threaded workqueues the deadlock happens when the work
+	 * is after the work issuing the flush_work(). For rescuer equipped
+	 * workqueues the deadlock happens when the rescuer stalls, blocking
+	 * forward progress.
 	 */
-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)
+	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
 		lock_map_acquire(&pwq->wq->lockdep_map);
-	else
-		lock_map_acquire_read(&pwq->wq->lockdep_map);
-	lock_map_release(&pwq->wq->lockdep_map);
+		lock_map_release(&pwq->wq->lockdep_map);
+	}
 
 	return true;
 already_gone:
