Queueing theory is a body of mathematics that predicts how work flowing through an organization will behave. It is used to design phone systems, Internet networking, traffic control systems, and other systems that manage work in packets. Over the last 100 years, we’ve learned a lot about queueing theory, and we know that it works.
The principles of queueing theory can be used to reliably improve or optimize the flow of work through an organization. It’s not guesswork, because it’s grounded in science. These principles give us the tools used in various “agile” frameworks, as well as in other fields such as manufacturing:
- Batch size
- WIP limits
- Pull management
- Cadence
- Job prioritization
Applying these principles and tools allows you to design a management framework or method that will optimize your system. Depending on the properties of the batches (evenly sized or not? interdependent or not? predictable or not?), different management frameworks will be optimal for different organizations. The frameworks may look very different from each other, but they all derive from the same principles.
Luckily, it’s not necessary to do much actual math in order to benefit from queueing theory. You do need to understand the basic model involved – how to define a batch, and how to see your work flowing through your organization. Most of the principles can then be applied successfully by using common sense and approximation.
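As one example of how little math is needed, Little’s Law — a core result of queueing theory — relates average work in progress, throughput, and cycle time directly. The sketch below uses hypothetical team numbers (5 items finished per week, WIP limits of 20 and 10) purely for illustration:

```python
# Little's Law: average WIP = throughput × average cycle time.
# Rearranged, cycle time = WIP / throughput.

def average_cycle_time(wip, throughput_per_week):
    """Estimate average cycle time in weeks from WIP and weekly throughput."""
    return wip / throughput_per_week

# Hypothetical team: finishes 5 items per week on average.
# With 20 items in progress, each item takes about 4 weeks to flow through.
print(average_cycle_time(20, 5))  # 4.0

# Halving the WIP limit to 10 halves the expected cycle time.
print(average_cycle_time(10, 5))  # 2.0
```

This back-of-envelope arithmetic is exactly the kind of approximation the principles rely on: no simulation or calculus required, just counting what’s in progress and how fast it finishes.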
Your journey through queueing theory starts with the concept of a batch. We’ll explore batches in Achieve Agility with the Right Batch, and then see how to speed up the flow of batches in Get More Batches Done Faster.
If you’d like to know a bit of the history of these powerful tools first, try A Short History of Queueing Theory or What Makes Agile Work.
First revision published November 2020.