How do you make shared mutable state thread-safe in Swift?

Start with the race condition, then compare locks, semaphores, queues, barriers, and actors for protecting shared mutable state in real Swift code

Answer

The core idea

This problem usually appears with reference types or any other state that is shared across multiple threads or tasks. Once the same mutable state can be accessed from more than one execution context, the code must guarantee that those accesses are coordinated.

If that coordination is missing, the program can hit a data race. One thread may read a value while another thread is changing it, or two threads may both update the same value at once. That can lead to lost updates, inconsistent reads, and bugs that appear randomly under load.

The short interview answer is: shared mutable state, especially in reference types, becomes unsafe when multiple threads can access it without synchronization. Thread safety means putting that state behind a synchronization boundary so reads and writes cannot race with each other.

1. What goes wrong if it is not thread-safe

This counter looks harmless, but it is not safe if multiple threads call it at once:

swift
final class Counter {
    private var value = 0

    func increment() {
        value += 1 // ❌ Not atomic
    }

    func getValue() -> Int {
        value
    }
}

value += 1 is really a read, then a calculation, then a write. If two threads do that at the same time, they can both read the same old value and both write back the same result. One increment is lost.

Reads can also be part of the problem. If one thread reads while another writes, that is still unsynchronized access to shared mutable state. In interview terms, that is a data race.
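The lost update is easy to see in practice. This is a deliberately racy sketch, assuming the increments run concurrently via DispatchQueue.concurrentPerform; Thread Sanitizer will (correctly) flag it, and strict-concurrency builds may refuse to compile it without workarounds:

```swift
import Foundation

final class Counter {
    private var value = 0
    func increment() { value += 1 } // read–modify–write, not atomic
    func getValue() -> Int { value }
}

let counter = Counter()

// 10 threads, 10_000 increments each. With no synchronization, the
// final count is frequently below 100_000 because overlapping
// read–modify–write cycles overwrite each other's results.
DispatchQueue.concurrentPerform(iterations: 10) { _ in
    for _ in 0..<10_000 {
        counter.increment()
    }
}

print(counter.getValue()) // at most 100_000, often less
```

Run it a few times: the final count varies from run to run, which is exactly the nondeterminism interviewers want you to name.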

2. NSLock: the simplest fix

For small critical sections, NSLock is often the most direct answer:

swift
final class CounterWithLock {
    private var value = 0
    private let lock = NSLock()

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        value += 1
    }

    func getValue() -> Int {
        lock.lock()
        defer { lock.unlock() }
        return value
    }
}

Only one thread can be inside the protected section at a time. This is a good choice when the protected work is short and synchronous.

The main rule is practical: keep the critical section small, and do not hold the lock while doing slow work like networking, disk I/O, or long computations.
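One way to follow that rule is to do the expensive part outside the lock and take the lock only for the brief mutation. This is a hypothetical sketch (the names and the `uppercased()` stand-in for slow work are illustrative, not from the original):

```swift
import Foundation

final class ResultStore {
    private var results: [String] = []
    private let lock = NSLock()

    func process(_ input: String) {
        // Slow work happens here, with the lock NOT held.
        let transformed = input.uppercased() // stand-in for real slow work

        // The lock is held only for the brief append.
        lock.lock()
        defer { lock.unlock() }
        results.append(transformed)
    }

    func snapshot() -> [String] {
        lock.lock()
        defer { lock.unlock() }
        return results
    }
}

let store = ResultStore()
store.process("hello")
print(store.snapshot()) // ["HELLO"]
```

Keeping the slow work outside the critical section means other threads are blocked only for the append itself, not for the whole computation.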

3. Mutexes: the lower-level version of the same idea

A mutex means mutual exclusion: one thread enters, others wait. NSLock is the most common Cocoa-level mutex wrapper, but you may also see lower-level primitives such as pthread_mutex.

swift
import Foundation

final class CounterWithMutex {
    private var value = 0
    private var mutex = pthread_mutex_t()

    init() {
        pthread_mutex_init(&mutex, nil)
    }

    deinit {
        pthread_mutex_destroy(&mutex)
    }

    func increment() {
        pthread_mutex_lock(&mutex)
        defer { pthread_mutex_unlock(&mutex) }
        value += 1
    }

    func getValue() -> Int {
        pthread_mutex_lock(&mutex)
        defer { pthread_mutex_unlock(&mutex) }
        return value
    }
}

In most app code, NSLock is preferred over a raw mutex because it is easier to read and maintain, and because the raw version is easy to get subtly wrong: Swift does not formally guarantee a stable address for a stored pthread_mutex_t property, so patterns that pass &mutex to C functions are fragile. Mention mutexes in interviews to show you understand the underlying idea, not because the lowest-level API is usually required.

4. DispatchSemaphore: another way to protect shared state

A semaphore can also protect shared mutable state by allowing only one thread into the critical section at a time:

swift
final class CounterWithSemaphore {
    private var value = 0
    private let semaphore = DispatchSemaphore(value: 1)

    func increment() {
        semaphore.wait()
        defer { semaphore.signal() }
        value += 1
    }

    func getValue() -> Int {
        semaphore.wait()
        defer { semaphore.signal() }
        return value
    }
}

This works much like a lock: wait() enters the protected section, and signal() leaves it.

For simple shared state, NSLock is usually the clearer tool. Still, this example is useful because it shows the same core idea: thread safety comes from putting all access behind one synchronization boundary.

Semaphores become more distinct when the count is greater than 1, because they can allow a limited number of threads to access a resource at the same time.
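A small sketch can show that capping behavior, assuming DispatchQueue.concurrentPerform for the worker threads; the `active`/`peak` bookkeeping here is illustrative and is guarded by its own NSLock so the measurement itself does not race:

```swift
import Foundation

let pool = DispatchSemaphore(value: 3) // up to 3 threads inside at once
let bookkeeping = NSLock()
var active = 0
var peak = 0

DispatchQueue.concurrentPerform(iterations: 8) { _ in
    pool.wait()            // acquire one of the 3 slots
    defer { pool.signal() } // release the slot

    bookkeeping.lock()
    active += 1
    peak = max(peak, active)
    bookkeeping.unlock()

    Thread.sleep(forTimeInterval: 0.02) // simulated work

    bookkeeping.lock()
    active -= 1
    bookkeeping.unlock()
}

print(peak) // never exceeds 3
```

This is the classic "limit concurrent downloads / database connections" use case, and it is something a plain lock cannot express.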

5. A serial DispatchQueue: clean and easy to reason about

Another good approach is to move all access onto one private serial queue:

swift
final class CounterWithSerialQueue {
    private var value = 0
    private let queue = DispatchQueue(label: "com.example.counter.serial")

    func increment() {
        queue.sync {
            value += 1
        }
    }

    func getValue() -> Int {
        queue.sync {
            value
        }
    }
}

This avoids manual locking and makes ownership very explicit: the queue is the synchronization boundary.

It is a strong answer when the code already uses GCD heavily. The main caution is not to call sync again from work that is already running on the same serial queue, because that will deadlock.
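The deadlock that caution describes looks like this sketch; the nested call is left commented out because it would hang, and dispatchPrecondition is shown as a cheap development-time check of queue expectations:

```swift
import Foundation

let queue = DispatchQueue(label: "com.example.serial")

queue.sync {
    // We are now running ON `queue`. A nested sync would wait for the
    // queue to finish the block it is currently running — this very
    // block — so it could never start:
    //
    // queue.sync { }   // ❌ deadlock: waits on itself forever

    // Asserting where we are can catch this class of bug early:
    dispatchPrecondition(condition: .onQueue(queue))
}

// sync can also return a value computed on the queue:
let result: Int = queue.sync { 21 * 2 }
print(result) // 42
```

If a method on the queue needs work from another method, call the underlying unsynchronized logic directly instead of re-entering the queue with sync.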

6. A concurrent queue with a barrier: good for read-heavy state

If reads are frequent and writes are less common, a concurrent queue with barrier writes can be a better fit:

swift
final class CounterWithBarrier {
    private var value = 0
    private let queue = DispatchQueue(
        label: "com.example.counter.concurrent",
        attributes: .concurrent
    )

    func increment() {
        queue.sync(flags: .barrier) {
            value += 1 // ✅ Exclusive write
        }
    }

    func getValue() -> Int {
        queue.sync {
            value
        }
    }
}

Here, multiple reads can run concurrently, but a write waits until earlier reads finish and blocks later reads until the update is done.

Two interview details matter:

  • barrier only gives special behavior on your own custom concurrent queue, not on a global queue
  • this pattern is most useful when you have many reads and relatively few writes

7. actor: the modern Swift answer

In new Swift concurrency code, an actor is usually the clearest high-level solution:

swift
actor CounterActor {
    private var value = 0

    func increment() {
        value += 1
    }

    func getValue() -> Int {
        value
    }
}

Task {
    let counter = CounterActor()
    await counter.increment()
    let current = await counter.getValue()
    print(current)
}

An actor protects its mutable state through actor isolation, so callers must cross that boundary with await.

That is why actors are such a strong interview answer: they are not just a locking technique, they are a language-level model for safe shared state. In new code, this is often the best default unless you are working inside older GCD or callback-based infrastructure.

8. Do you protect writes, reads, or both?

Always protect writes if multiple threads can write the same state.

Protect reads too when reads can overlap with writes. That is the common real-world case, and it is why locking only increment() is often not enough.
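A sketch of the half-protected pattern that warning is about — writes locked, reads not — which still races under concurrency; the fix is simply to take the same lock in the getter, as CounterWithLock does above:

```swift
import Foundation

final class HalfProtectedCounter {
    private var value = 0
    private let lock = NSLock()

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        value += 1
    }

    // ❌ Unlocked read: it can overlap with increment() running on
    // another thread, so this is still a data race even though all
    // writes are serialized against each other.
    func getValue() -> Int {
        value
    }
}

let counter = HalfProtectedCounter()
counter.increment()
print(counter.getValue()) // 1 (correct single-threaded, racy under load)
```

The rule of thumb: every access path to the shared state, reads included, must go through the same synchronization boundary.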

9. Why it matters in a real app

This is not just a theoretical bug:

  • counters, caches, and dictionaries can silently lose updates
  • app state can become inconsistent and hard to reproduce
  • crashes may appear only under load, on slow devices, or in production
  • code can look fine in review but fail randomly because scheduling is nondeterministic

That is why interviewers care about the reasoning, not just the API names.

Interview angle

A strong answer sounds like this: start by defining the race condition in the unsafe counter, then explain that shared mutable state needs one synchronization boundary. From there, compare the tools: NSLock for simple short critical sections, a mutex for lower-level control, a semaphore when you truly need semaphore behavior, a serial queue for clean GCD-based isolation, a barrier queue for many reads and fewer writes, and an actor as the modern Swift-first choice.