Consider a large web-based database. Google is, roughly speaking, an example: there may be many users who want to read from the database, but only a few users who are allowed to write to it. If we use standard locks to control access to the database, the application will be much slower than it could be with a more clever type of lock.
Suppose we have one object that is shared among several threads. Suppose also that each thread is either a reader or a writer. Readers only read data but never modify it, while writers read and modify data. If we know which threads are reading and which ones are writing, what can we do to increase concurrency?
First, we have to prevent two writers from writing at the same time. In addition, a reader cannot read while a writer is writing. There is no problem, however, in allowing many readers to read at the same time. Read-write locks achieve exactly this, and can greatly improve performance for this sort of application.
Note that the lock is the same for both readers and writers (called 'rw' in the slides), but readers acquire it with an rlock() and writers acquire the same lock with a wlock(). Writers requesting a wlock() must wait until all readers and writers have released the lock. Readers requesting an rlock() can acquire it even if other readers are holding the lock; readers, however, must still wait until any writer holding the lock releases it.
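The rlock()/wlock() names above are the generic ones from the slides; as a concrete illustration, here is a minimal sketch using the POSIX equivalent, pthread_rwlock_t, where pthread_rwlock_rdlock() plays the role of rlock() and pthread_rwlock_wrlock() plays the role of wlock(). The shared_value variable and the thread counts are just placeholders for this example.

    #include <pthread.h>
    #include <stdio.h>

    /* One read-write lock shared by all threads (the 'rw' in the slides),
       protecting one piece of shared data. */
    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value = 0;

    /* Reader: takes the lock in read (shared) mode; many readers may hold
       it at once, as long as no writer holds it. */
    void *reader(void *arg) {
        pthread_rwlock_rdlock(&rw);        /* corresponds to rlock() */
        printf("reader %ld sees %d\n", (long)arg, shared_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    /* Writer: takes the lock in write (exclusive) mode; must wait until
       all readers and writers have released the lock. */
    void *writer(void *arg) {
        pthread_rwlock_wrlock(&rw);        /* corresponds to wlock() */
        shared_value += 1;
        printf("writer %ld wrote %d\n", (long)arg, shared_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        pthread_create(&t[0], NULL, reader, (void *)0);
        pthread_create(&t[1], NULL, writer, (void *)1);
        pthread_create(&t[2], NULL, reader, (void *)2);
        pthread_create(&t[3], NULL, writer, (void *)3);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        pthread_rwlock_destroy(&rw);
        return 0;
    }

Compile with -pthread. The key point is that the two reader threads could run concurrently while holding the lock, whereas each writer excludes everyone else for the duration of its update.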