What concurrency problems should you know in Swift?

Distinguish race conditions, data races, priority inversion, deadlock, and actor reentrancy anomalies with short Swift examples and practical fixes

Answer

The core idea

Concurrent code usually fails in one of two ways:

  • safety bugs: the program reaches the wrong state because timing or shared mutable memory was not controlled
  • liveness bugs: work keeps waiting, retrying, or being delayed, so the system makes poor progress or no progress at all

The short interview answer: a race condition is any bug where correctness depends on timing or ordering; a data race is the narrower case of unsynchronized concurrent access to the same mutable memory; and deadlock and priority inversion are progress problems rather than pure state-corruption problems. Actors help with shared state, but actor methods can still be re-entered after an await, so order still matters.

1. Race condition

A race condition means the result depends on which operation wins the timing race. The key point is broader than memory corruption: even if each individual operation is valid, the overall program can still be wrong because the required order was never enforced.

swift
import Foundation

final class BankAccount {
    private var balance = 100
    private let queue = DispatchQueue(label: "bank.account.serial")

    func canWithdraw(_ amount: Int) -> Bool {
        queue.sync {
            balance >= amount
        }
    }

    func withdraw(_ amount: Int) {
        queue.sync {
            balance -= amount
        }
    }

    func currentBalance() -> Int {
        queue.sync {
            balance
        }
    }
}

let account = BankAccount()
let group = DispatchGroup()

for _ in 0..<2 {
    group.enter()
    DispatchQueue.global().async {
        if account.canWithdraw(80) {
            Thread.sleep(forTimeInterval: 0.1)
            account.withdraw(80)
        }
        group.leave()
    }
}

group.wait()
print(account.currentBalance()) // ❌ Can become -60

That is a race condition because the code performs a check-then-act sequence in two separate steps. Each individual balance access is synchronized through a private serial queue, so there is no raw shared-memory corruption. The bug is that both threads can observe balance == 100 before either withdrawal runs.

This example is intentionally different from a data race. The state access itself is synchronized, but the overall operation is still not atomic. The final result depends on timing, so the business rule is wrong even though the reads and writes are individually protected.

Good interview wording is: every data race is a race condition, but not every race condition is a data race. The fix here is to make the whole withdrawal one atomic operation, not just to protect the getter-like pieces separately.
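A sketch of that fix (the SafeBankAccount and withdrawIfPossible names are illustrative, not from the original example): the check and the mutation now live in one queue.sync block, so nothing can interleave between them.

```swift
import Dispatch

// Illustrative fix: the check and the act share ONE critical section,
// so no other withdrawal can run between them.
final class SafeBankAccount {
    private var balance = 100
    private let queue = DispatchQueue(label: "bank.account.serial")

    /// Returns true only if the withdrawal actually happened.
    @discardableResult
    func withdrawIfPossible(_ amount: Int) -> Bool {
        queue.sync { () -> Bool in
            guard balance >= amount else { return false }
            balance -= amount // check-then-act is now atomic
            return true
        }
    }

    func currentBalance() -> Int {
        queue.sync { balance }
    }
}

let account = SafeBankAccount()
let group = DispatchGroup()

for _ in 0..<2 {
    group.enter()
    DispatchQueue.global().async {
        account.withdrawIfPossible(80)
        group.leave()
    }
}

group.wait()
print(account.currentBalance()) // always 20: only one withdrawal can succeed
```

Returning a Bool also forces callers to handle the failure case instead of assuming the withdrawal worked.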

2. Data race

A data race is the more specific bug: multiple threads or tasks touch the same mutable memory at the same time, and at least one access is a write.

swift
import Foundation

var counter = 0

DispatchQueue.global().async {
    counter += 1 // Thread A writes
}

DispatchQueue.global().async {
    counter += 1 // Thread B writes the same memory without synchronization
}

counter += 1 is not one indivisible operation. It is a read, then a calculation, then a write. If two threads interleave those steps, updates are lost.

This example is intentionally different from the race-condition example above. Here the bug is not a stale business-rule check. The bug is that two threads are touching the same memory location without synchronization.

One subtle interview detail matters here: locking only a computed property getter and setter is still not enough for counter += 1, because += is a read-modify-write sequence. The whole mutation must happen under one synchronization boundary.

The expected fix is to put the state behind one synchronization boundary: an actor, a private serial queue, or a lock around a full mutation method such as increment().
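A minimal actor-based sketch of that fix (the Counter type, ResultBox, and runDemo driver are hypothetical; the semaphore exists only to bridge the async result back to synchronous code):

```swift
import Dispatch

// Illustrative fix: the actor is the single owner of `value`,
// so `value += 1` can never interleave with another increment.
actor Counter {
    private var value = 0

    func increment() {
        value += 1 // runs with exclusive access to actor state
    }

    func current() -> Int { value }
}

// Bridges the async result back to synchronous top-level code.
final class ResultBox: @unchecked Sendable { var value = 0 }

func runDemo() -> Int {
    let counter = Counter()
    let done = DispatchSemaphore(value: 0)
    let box = ResultBox()

    Task {
        // 1_000 concurrent increments, all serialized by the actor.
        await withTaskGroup(of: Void.self) { group in
            for _ in 0..<1_000 {
                group.addTask { await counter.increment() }
            }
        }
        box.value = await counter.current()
        done.signal()
    }

    done.wait()
    return box.value
}

print(runDemo()) // always 1000, never a lost update
```

The key point is that the whole read-modify-write of `value += 1` happens inside one actor-isolated method, not split across a getter and a setter.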

3. Priority inversion

Priority inversion happens when low-priority work holds a resource that high-priority work needs. The high-priority task cannot proceed, so the system behaves as if the lower-priority task temporarily became more important.

swift
import Foundation

let low = DispatchQueue.global(qos: .background)
let high = DispatchQueue.global(qos: .userInteractive)
let lock = NSLock()

low.async {
    lock.lock()
    defer { lock.unlock() }

    Thread.sleep(forTimeInterval: 2)
    print("Low-priority task finished")
}

high.async {
    lock.lock() // ❌ UI-critical work waits for background work
    defer { lock.unlock() }

    print("High-priority task finally ran")
}

If medium-priority work keeps preempting the low-priority task, the delay gets worse. That is why interview answers often mention starvation nearby: the high-priority task is ready, but it still cannot make progress because the resource owner is delayed.

The fix is to avoid long lock holds, avoid making high-priority work wait on low-priority work, and redesign ownership so critical paths do less blocking.
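One concrete mitigation is to replace the lock with a single serial-queue owner: when a higher-QoS caller blocks in sync, GCD can raise the effective QoS of the queued work, which shortens the inversion window. A sketch with a hypothetical SettingsStore:

```swift
import Dispatch

// Hypothetical store: a single serial queue owns the dictionary instead of a lock.
// Blocking in `sync` lets GCD boost the effective QoS of the queued work,
// which limits how long high-QoS work waits on low-QoS work.
final class SettingsStore {
    private var values: [String: String] = [:]
    private let owner = DispatchQueue(label: "settings.owner", qos: .utility)

    func set(_ value: String, for key: String) {
        owner.async { self.values[key] = value } // fire-and-forget write
    }

    func get(_ key: String) -> String? {
        owner.sync { values[key] } // blocking read; caller QoS is donated
    }
}

let store = SettingsStore()
store.set("dark", for: "theme")
print(store.get("theme") ?? "nil") // "dark": FIFO order on the serial queue
```

The async writes also keep the critical path short: high-priority readers only ever wait for work already in flight, never for a long-held lock.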

4. Deadlock

A deadlock means execution stops forever because progress depends on work that can never run: classically, two contexts each hold a resource the other needs, but the same trap occurs when a single serial queue waits on itself.

swift
import Foundation
import UIKit

final class ProfileViewController: UIViewController {
    @IBOutlet private weak var statusLabel: UILabel!

    @IBAction private func refreshButtonTapped(_ sender: UIButton) {
        showLoadingState()
    }

    private func showLoadingState() {
        DispatchQueue.main.sync {
            statusLabel.text = "Loading..." // ❌ Deadlock if called from the main thread
        }
    }
}

This scenario is realistic on iOS because button actions already run on the main thread. refreshButtonTapped calls showLoadingState(), and that helper tries to synchronously dispatch back onto the same main queue. The queued block cannot start until the current main-thread work finishes, but the current work is waiting for that block to finish first. The app freezes.

The classic fix is simple: never call sync onto the same serial queue you are already running on. For UI work, if you are already on the main thread, update the UI directly. If you might be off-main, use DispatchQueue.main.async instead of sync.
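That rule can be packaged into a small helper. This onMain function is a common pattern, not a standard-library API:

```swift
import Foundation

// Common pattern (not a standard API): run UI work immediately when already
// on the main thread, otherwise hop over asynchronously — never `sync`,
// so it cannot deadlock against the main queue.
func onMain(_ work: @escaping () -> Void) {
    if Thread.isMainThread {
        work() // already on main: no dispatch needed
    } else {
        DispatchQueue.main.async(execute: work) // off main: async, not sync
    }
}

var statusText = "Idle"
onMain { statusText = "Loading..." } // called from the main thread: runs inline
print(statusText) // "Loading..."
```

One design caveat: the helper is sometimes synchronous and sometimes asynchronous, so callers must not rely on the work having finished when onMain returns.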

5. Actor reentrancy anomaly

Actors protect mutable state, but they are reentrant across suspension points. Once an actor method hits await, other work may enter the same actor before the original method resumes.

swift
actor Inventory {
    private var stock = 1

    func reserveOne() async -> Bool {
        guard stock > 0 else { return false }

        await Task.yield()

        stock -= 1 // ❌ Another call may have changed stock while we were suspended
        return true
    }
}

This is the actor version of the same ordering problem: the method checked an invariant before the await, then acted on that stale assumption after the await.

A good fix is to either mutate before suspension or re-check the invariant after resuming:

swift
actor Inventory {
    private var stock = 1

    func reserveOne() async -> Bool {
        guard stock > 0 else { return false }

        await Task.yield()

        guard stock > 0 else { return false } // ✅ Re-validate after suspension
        stock -= 1
        return true
    }
}

Good interview wording is: actors remove ordinary data races on isolated state, but they do not guarantee that a long actor method runs atomically across await.
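The other fix mentioned above, mutating before suspension, can be sketched like this (the driver code and currentStock helper are illustrative; the design cost is that a failure in the later async work would require an explicit rollback):

```swift
import Dispatch

actor Inventory {
    private var stock = 1

    func reserveOne() async -> Bool {
        guard stock > 0 else { return false }
        stock -= 1 // ✅ Mutate first: no await between check and act

        await Task.yield() // later async work, e.g. notifying a server
        // If that work could fail, roll back with stock += 1 on failure.
        return true
    }

    func currentStock() -> Int { stock }
}

let done = DispatchSemaphore(value: 0)
Task {
    let inventory = Inventory()
    let first = await inventory.reserveOne()
    let second = await inventory.reserveOne()
    print(first, second) // true false: the reservation commits before suspension
    done.signal()
}
done.wait()
```

Because the check and the decrement run with no suspension between them, no re-entrant call can observe the stale pre-decrement state.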

6. How to avoid these problems

  • Prefer immutable values and message passing over shared mutable state.
  • Put shared mutable state behind one owner: an actor, a private serial queue, or a lock.
  • Keep critical sections short, and never hold a lock across long waits, blocking I/O, or callback chains.
  • Do not call sync onto the same serial queue, and keep lock ordering consistent.
  • Inside actors, treat every await as a boundary where assumptions may become stale.
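The first two bullets can be sketched together with an AsyncStream pipeline: the producer sends immutable values, and only the consumer owns the mutable result. The Event type, Collected box, and runPipeline driver are illustrative, and AsyncStream.makeStream requires Swift 5.9 or later.

```swift
import Dispatch

// Immutable message type: safe to hand between tasks.
struct Event: Sendable {
    let name: String
}

// Bridges the consumer's result back to synchronous code for the demo.
final class Collected: @unchecked Sendable { var names: [String] = [] }

func runPipeline() -> [String] {
    // Requires Swift 5.9+ for AsyncStream.makeStream.
    let (events, continuation) = AsyncStream.makeStream(of: Event.self)
    let done = DispatchSemaphore(value: 0)
    let collected = Collected()

    Task {
        for await event in events {
            collected.names.append(event.name) // only the consumer touches this array
        }
        done.signal()
    }

    // Producer sends immutable values; no shared mutable state is exposed.
    continuation.yield(Event(name: "started"))
    continuation.yield(Event(name: "finished"))
    continuation.finish()

    done.wait()
    return collected.names
}

print(runPipeline()) // ["started", "finished"]
```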

7. Why it matters

These are not theoretical interview-only bugs:

  • race conditions and data races create stale UI, lost writes, and random production-only failures
  • priority inversion hurts responsiveness, especially on user-interactive paths
  • deadlock freezes the app completely
  • livelock, a related liveness failure where tasks keep reacting to each other without advancing, burns CPU while still making no progress
  • actor reentrancy bugs are subtle because the code looks isolated and safe at first glance

8. How to debug it

Keep the debugging story practical:

  1. Use Thread Sanitizer for shared-memory bugs such as data races.
  2. Pause the app and inspect blocked stack traces when you suspect deadlock or long lock waits.
  3. Add logs or signposts before and after lock, sync, and await points so you can see the real order of events.
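Step 3 can be as simple as a timing wrapper around a lock. This TimedLock is a hypothetical helper, a stand-in for proper signposts when Instruments is not available:

```swift
import Foundation

// Hypothetical debugging helper: times how long a lock acquisition waited,
// a poor man's signpost for spotting long lock waits in logs.
final class TimedLock {
    private let lock = NSLock()

    func withLock<T>(label: String, _ body: () -> T) -> T {
        let start = Date()
        lock.lock()
        let waited = Date().timeIntervalSince(start)
        if waited > 0.05 { // log only suspiciously long waits
            print("[lock] \(label) waited \(waited)s")
        }
        defer { lock.unlock() }
        return body()
    }
}

let sharedState = TimedLock()
let answer = sharedState.withLock(label: "cache") { 21 * 2 }
print(answer) // 42
```

Logging only the slow acquisitions keeps the output quiet in the common case while making contention hot spots jump out.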

Interview angle

A strong answer starts by separating race condition from data race. Then move to the liveness problems: priority inversion delays important work, and deadlock stops progress completely. Finish with the modern Swift point: actors solve a lot of shared-state bugs, but actor methods are reentrant across await, so invariants must be re-checked after suspension.

For the deeper follow-up on how to protect shared mutable state with NSLock, serial queues, barriers, semaphores, and actors, see How do you make shared mutable state thread-safe in Swift?.