DFS and BFS Tradeoffs: In general, DFS has lower memory (space) complexity than BFS, because it needs to store only the states and transitions on the current path. BFS, in contrast, must store every state and transition explored so far.
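The difference can be made concrete on a toy state graph. The sketch below (the graph, function names, and the way "storage" is measured are all illustrative assumptions, not part of any particular model checker) counts peak path length for DFS against the size of the visited set that BFS must keep:

```python
from collections import deque

# Hypothetical tiny state graph: states are ints, transitions given by a dict.
GRAPH = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}

def dfs_peak_path(root):
    """Recursive DFS; peak storage grows with the deepest path explored."""
    peak = 0
    def visit(state, depth):
        nonlocal peak
        peak = max(peak, depth)          # depth = states held on the current path
        for succ in GRAPH[state]:
            visit(succ, depth + 1)
    visit(root, 1)
    return peak

def bfs_stored_states(root):
    """BFS; every state explored so far stays in the visited set."""
    visited = {root}
    frontier = deque([root])
    while frontier:
        state = frontier.popleft()
        for succ in GRAPH[state]:
            if succ not in visited:
                visited.add(succ)
                frontier.append(succ)
    return len(visited)

print(dfs_peak_path(0), bfs_stored_states(0))  # → 3 7
```

On this seven-state tree, DFS never holds more than the three states of one root-to-leaf path, while BFS ends up storing all seven.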
One disadvantage of DFS is that it can perform poorly on infinite or very large state spaces. A search that makes the wrong first transition choice can run indefinitely without success. This is particularly true if a state has a successor that is also one of its ancestors and the model checker is not configured to check for visited states: the model checker can easily run out of memory before it reaches an answer.
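The ancestor problem can be sketched in a few lines. In the hypothetical graph below, state 1's successor 0 is also its ancestor; without the `succ not in path` check, the recursion would loop between 0 and 1 forever (the graph, function name, and path-based check are illustrative assumptions):

```python
# Hypothetical graph with a cycle: 0 -> 1 -> 0, plus an exit 1 -> 2.
GRAPH = {0: [1], 1: [0, 2], 2: []}

def dfs_find(state, target, path=()):
    """DFS that refuses to revisit states on the current path.
    Dropping the `not in path` test would make the 0 -> 1 -> 0 loop recurse forever."""
    if state == target:
        return path + (state,)
    for succ in GRAPH[state]:
        if succ not in path:          # skip successors that are also ancestors
            found = dfs_find(succ, target, path + (state,))
            if found:
                return found
    return None

print(dfs_find(0, 2))  # → (0, 1, 2)
```

Checking only the current path (rather than all visited states) keeps DFS's memory advantage while still guarding against this particular trap.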
With BFS, the model checker cannot get trapped exploring a blind alley. It is guaranteed to find an error state via a shortest path from the root, since all states at level n are examined before any state at level n+1. Furthermore, if there are multiple error states, the one requiring the minimum number of transitions from the start state is found first. This contrasts with DFS, which may find a long path to an error state in one part of the state space while a shorter path exists in another, unexplored part. On the other hand, if the number of neighboring states (the branching factor) is too large, the BFS model checker may run out of memory before it can reach the greater depths where the interesting states may be hiding. Of course, if the model checker is configured to store all visited states regardless (for example, to avoid re-exploring them), then memory consumption does not differ between the two strategies.
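The shortest-counterexample guarantee can be illustrated on a small example. In the hypothetical model below there are two error states: 'E' is one transition from the start, while 'F' is three transitions away down the branch a naive DFS would explore first. A BFS over the same graph reaches 'E' first (graph, names, and the parent-pointer path reconstruction are illustrative assumptions):

```python
from collections import deque

# Hypothetical model: two error states; 'E' is 1 step away, 'F' is 3 steps away.
GRAPH = {'S': ['A', 'E'], 'A': ['B'], 'B': ['F'], 'E': [], 'F': []}
ERRORS = {'E', 'F'}

def bfs_counterexample(root):
    """BFS returns a minimum-length path to whichever error state is closest."""
    parent = {root: None}
    frontier = deque([root])
    while frontier:
        state = frontier.popleft()
        if state in ERRORS:
            path = []
            while state is not None:     # walk parent pointers back to the root
                path.append(state)
                state = parent[state]
            return path[::-1]
        for succ in GRAPH[state]:
            if succ not in parent:
                parent[succ] = state
                frontier.append(succ)
    return None

print(bfs_counterexample('S'))  # → ['S', 'E']
```

A DFS that happens to try 'A' before 'E' would instead report the longer counterexample S, A, B, F, even though a one-step path to an error exists.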