Time complexity of queue operations

Queue ADT operations are usually O(1) per operation for the standard implementations. Costs change when you use a dynamic array (occasional O(n) resize) or a heap-based priority queue (O(log n) insert/remove).

On this page
  • Typical operations — enqueue, dequeue, peek
  • By implementation — array, circular, linked list, heap
  • Amortized analysis — dynamic array growth
  • Quick reference table

1) Operations we measure

Standard queue API (names may vary)
Operation      Meaning                        Notes
enqueue        Add at rear                    May fail if capacity is fixed and full
dequeue        Remove from front              Underflow if empty
peek / front   Read front without removing    O(1) when the front index/pointer is known
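These three operations can be sketched with Python's standard-library `collections.deque`, which supports O(1) appends and pops at both ends:

```python
from collections import deque

q = deque()          # double-ended queue; both ends are O(1)
q.append("a")        # enqueue: add at rear
q.append("b")
front = q[0]         # peek: read the front without removing it
first = q.popleft()  # dequeue: remove from the front
```

After these calls, `front` and `first` are both `"a"` and the queue still holds `"b"`.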

2) Complexity by implementation

Worst-case time (n = current size, N = capacity)
Implementation                  Enqueue          Dequeue          Peek
Linked list (front/rear)        O(1)             O(1)             O(1)
Circular array (fixed N)        O(1)             O(1)             O(1)
Linear array (shift elements)   O(1)*            O(n)             O(1)
Dynamic circular array          O(1) amortized   O(1) amortized   O(1)
Min-/max-heap priority queue    O(log n)         O(log n)         O(1)

*Enqueue at the end is O(1) while an unused slot remains; a naive linear-array dequeue without circular indexing shifts every remaining element forward, costing O(n).
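A minimal ring-buffer sketch makes the circular-array row concrete (the class name `CircularQueue` is illustrative, not from any library). Dequeue just advances the front index modulo the capacity, so nothing is ever shifted:

```python
class CircularQueue:
    """Fixed-capacity FIFO queue over a plain array (ring buffer)."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._front = 0          # index of the current front element
        self._size = 0

    def enqueue(self, x):        # O(1)
        if self._size == len(self._buf):
            raise OverflowError("queue full")
        rear = (self._front + self._size) % len(self._buf)
        self._buf[rear] = x
        self._size += 1

    def dequeue(self):           # O(1): advance front, no shifting
        if self._size == 0:
            raise IndexError("queue empty")
        x = self._buf[self._front]
        self._buf[self._front] = None
        self._front = (self._front + 1) % len(self._buf)
        self._size -= 1
        return x

    def peek(self):              # O(1)
        if self._size == 0:
            raise IndexError("queue empty")
        return self._buf[self._front]
```

The modulo arithmetic lets the rear wrap past the end of the array into slots freed by earlier dequeues, which is exactly what the naive shifting implementation lacks.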

3) Amortized cost (dynamic array)

When the array doubles in size, copying takes O(n), but resizes happen rarely enough that the average cost per enqueue stays O(1) (an aggregate analysis over many enqueues). The same idea applies to shrink-when-mostly-empty policies.
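The aggregate argument can be checked by counting copies directly. This sketch (the function name `total_copies` is made up for illustration) simulates a doubling array and tallies every element copied during resizes:

```python
def total_copies(n, initial_capacity=1):
    """Count element copies made by n appends to a doubling array."""
    copies = 0
    cap = initial_capacity
    size = 0
    for _ in range(n):
        if size == cap:       # array full: resize before appending
            copies += size    # copy every current element over
            cap *= 2
        size += 1
    return copies
```

For any n, the copies form the geometric series 1 + 2 + 4 + ... < 2n, so the total work is O(n) and each append is O(1) amortized.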

4) Quick reference

What to say in interviews
Question                           Answer
Best queue for strict FIFO O(1)?   Linked list or circular array.
Priority by value?                 Binary heap: push/pop in O(log n).
BFS on a graph?                    Queue holds the frontier; each vertex/edge is handled once, so O(V + E) total, not per queue op.
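For the priority-queue answer, Python's standard `heapq` module shows the O(log n) push/pop and O(1) peek on a binary min-heap:

```python
import heapq

pq = []                          # binary min-heap stored in a plain list
for priority in (5, 1, 3):
    heapq.heappush(pq, priority)         # O(log n) per push

smallest = pq[0]                 # peek at the minimum: O(1)
order = [heapq.heappop(pq) for _ in range(len(pq))]  # O(log n) per pop
```

Popping drains the heap in ascending priority order, which is the behavior the table's "priority by value" row refers to.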

Key takeaway

State the implementation when you give Big-O: "queue" alone is ambiguous. Circular-array and linked-list queues are O(1) per operation; heaps add a log factor for priority ordering.