My point is that if you have 15,000 overdue flashcards then you're no longer doing SRS in any meaningful sense, and algorithmic intervals no longer matter; if you're getting too many cards to stay on schedule, whatever deck surgery we perform to help you catch up is producing something that's no longer SRS.
That doesn't mean it's not a valid system, but I don't think it's logically consistent to demand that all of your cards be reviewed on precisely scheduled intervals and then completely blow through those intervals; if you believe it's important that a particular card be studied on a particular date, then we should be doing everything we can to encourage you to review cards when they're due.
We actually are adding some 'manage overdue' options anyway in 4.0, but as with a lot of the stuff we're adding it's not something I would personally encourage anyone to use; if you've got that large a backlog then you probably ought to consider something other than algorithmic SRS.
The principle of our "Pleco New" system is actually that algorithmic SRS is pointless, something I've been saying in these forums for more than a decade now; as currently designed, "Pleco New" has:
- Fixed, graduated steps a la the old Leitner Box method for new / learning cards;
- Remaining reviews (up to your configured limit or until you stop) drawn randomly from old cards (but loosely prioritizing cards that haven't been reviewed for a while), without trying to fit any particular schedule or deeming any old card as 'due'.
This reflects the fact that our own experience, research papers we've read, and common sense (you're encountering these words in other places) all suggest that long-term vocabulary does not decay in anything like an exponential fashion, and thus that trying to review long-term cards on a specific algorithmic schedule is not a good use of your energy.
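To make the two-part design above concrete, here's a minimal sketch in Python. All names and the weighting scheme are my own invention for illustration, not Pleco's actual implementation: new cards climb fixed Leitner-style steps, while old cards are drawn randomly with a loose bias toward cards that haven't been reviewed in a while, and no old card is ever marked 'due'.

```python
import random

# Assumed example values for the fixed, graduated learning steps (in days);
# the real steps would be user-configurable.
LEITNER_STEPS_DAYS = [1, 3, 7, 14, 30]

def pick_old_cards(old_cards, n, now):
    """Draw n old cards for review, loosely prioritizing staleness.

    old_cards: list of (card_id, last_reviewed_time) tuples.
    now: current time, same units as last_reviewed_time.
    Weighted sampling without replacement: a card's chance of being
    picked grows with time since its last review, but nothing is
    scheduled and no card is considered overdue.
    """
    pool = [(card_id, max(1.0, now - last)) for card_id, last in old_cards]
    chosen = []
    for _ in range(min(n, len(pool))):
        total = sum(w for _, w in pool)
        r = random.uniform(0, total)
        acc = 0.0
        for i, (card_id, w) in enumerate(pool):
            acc += w
            if acc >= r:
                chosen.append(card_id)
                pool.pop(i)
                break
    return chosen
```

The key property is that long-unreviewed cards come up more often without any notion of a deadline, so falling behind is impossible by construction.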
I'm actually not convinced that even short-term memory decays exponentially in a meaningful way, but clearly you do want to review cards more often when you're first attempting to acquire them, and doing that on gradually increasing (but customizable) steps makes sense. However, there's a lot of wiggle room: except for the first few short intervals, we don't actually test you on all of the cards that are due today, but rather on 1/7 of the cards that are due in the next 7 days (in order to distribute reviews more evenly).
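The 1/7 smoothing idea can be sketched as follows; this is an illustrative reading of the description above (function and variable names are mine), not Pleco's actual code:

```python
import random
from datetime import date, timedelta

def smoothed_review_pool(cards_by_due_date, today):
    """Pick today's reviews from a 7-day window rather than just today.

    cards_by_due_date: dict mapping a date to the list of card ids due
    on that date. Instead of testing everything due today, gather all
    cards due within the next 7 days and sample roughly 1/7 of them,
    which spreads a lumpy schedule into an even daily load.
    """
    window = []
    for offset in range(7):
        window.extend(cards_by_due_date.get(today + timedelta(days=offset), []))
    if not window:
        return []
    share = -(-len(window) // 7)  # ceil(len / 7): today's slice of the week
    return random.sample(window, share)
```

A card due tomorrow might therefore be shown today and a card due today might slide a few days, which is exactly the "wiggle room" being claimed: small deviations from the schedule don't matter much.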
But as evidenced by the complex chain of priorities above, this is incredibly customizable. Internally, there's a set of filters assigning different priorities to different cards, which determine whether/when they show up in a test (not done rigorously, but if you're reviewing 200 cards we pick the top 200 in priority order and randomize them). Filters can have all sorts of interesting limits applied to them (per day, per card, per session, % of cards in session, % of cards due at the start of the day, etc.), and can be merged (for limits, or even for priority sorting if they use the same sort fields) with other groups (even from different profiles!).

That, plus very efficiently sorting large decks of cards into those filters, are the two things we've spent the most engineering time on for tests, and I hope you'll be pleased with the results. With this same system we were able to very easily set up a test that mimicked Anki's behavior right down to using the exact same priority order (even little quirks, like showing 'learning' cards with less than a 1-day interval high in their order but 'learning' cards with greater than a 1-day interval lower), and another that mimicked the peculiar sort orderings of our current flashcard system.
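As a rough illustration of the filter-and-priority mechanism described above (the `Filter` class, its fields, and the selection logic are all assumed for the sketch; the real system's limits and merge semantics are far richer): each filter matches some cards and carries a priority, a session takes the top N cards in priority order, and only then is the presentation order randomized.

```python
import random
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Filter:
    priority: int                        # lower value = higher priority
    matches: Callable[[dict], bool]      # predicate over a card record
    daily_limit: Optional[int] = None    # optional per-day cap (one of many possible limits)

def build_session(cards, filters, session_size):
    """Assign cards to filters in priority order, take the top
    session_size cards, then shuffle them for presentation."""
    prioritized = []
    seen = set()
    taken = {id(f): 0 for f in filters}
    for f in sorted(filters, key=lambda f: f.priority):
        for card in cards:
            if card["id"] in seen or not f.matches(card):
                continue
            if f.daily_limit is not None and taken[id(f)] >= f.daily_limit:
                break
            prioritized.append(card)
            seen.add(card["id"])
            taken[id(f)] += 1
    session = prioritized[:session_size]  # top N in priority order...
    random.shuffle(session)               # ...randomized only at the end
    return session
```

For example, a filter matching learning cards at priority 0 and one matching old cards at priority 1 guarantees learning cards always make the cut first, which is the same shape of rule that lets the system reproduce Anki-style orderings by just swapping in different filters.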