Thanks.
"Repeat incorrect" actually should already be matching the most recent test type for a card - is it not doing that now?
Otherwise, most of this would be achievable in 4.0 with some "Expert Mode" tweaks: you could create a category for "don't test listening" cards and then set your listening test type to exclude cards in that category (and likewise for multi-choice). Or, as an alternative, you could create a 'hint' field to display in listening tests to help disambiguate homophones. We also support running tests on multiple profiles simultaneously - 'meta profiles' - which would be a way to keep independent track of how you were doing with different sides of different cards. (This is how our new Anki emulation mode works: each 'side' has its own private hidden test profile.)
The default behavior in 4.0 - our new built-in "Learning" test type - has a set of fixed (customizable) intervals for a card to initially become 'learned', but after that it simply cycles through all of your 'review' cards without any particular attempt at optimizing intervals; we don't believe that, for language learning, long-term memory actually decays in the sort of exponential fashion that would make SRS worthwhile (and SRS has some negative side effects, most importantly people's review queues getting too backlogged). So the distinctions between test types matter less there, since all an incorrect answer does is make you repeat the card on that same test type a few times before it returns to the review pool.
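To make the scheduling concrete, here's a rough sketch of the "fixed learning steps, then cycle" behavior described above. The interval values, the repeat count after a miss, and all of the names here are illustrative assumptions, not the app's actual configuration or code:

```python
from collections import deque

# Assumed values for illustration only - in the app these would be customizable.
LEARNING_STEPS = [1, 10, 1440]  # minutes between initial passes (assumed)
LAPSE_REPEATS = 3               # repeats owed after an incorrect answer (assumed)

class Card:
    def __init__(self, name):
        self.name = name
        self.step = 0            # progress through LEARNING_STEPS
        self.repeats_left = 0    # pending repeats from a wrong answer

    @property
    def learned(self):
        return self.step >= len(LEARNING_STEPS)

def answer(card, correct, queue):
    """Requeue a card after an answer; note there is no interval optimization."""
    if not correct:
        card.repeats_left = LAPSE_REPEATS   # repeat soon, on the same test type
        queue.appendleft(card)
    elif card.repeats_left:
        card.repeats_left -= 1              # work off the owed repeats
        queue.appendleft(card)
    elif not card.learned:
        card.step += 1                      # advance one fixed learning step
        queue.append(card)
    else:
        queue.append(card)                  # learned: just cycle the review pool
```

The point of the sketch is the shape of the logic: a miss only triggers a few near-term repeats, and a 'learned' card goes back into a flat rotation rather than having its interval recalculated.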
But if a lot of people start making similar requests to yours, we can always consider making a more 'managed' take on Alternating a more official / easy-to-access option; all of this is basically just a question of which JSON configuration files we ship built into the app.
"Repeat incorrect" actually should already be matching the most recent test type for a card - is it not doing that now?
Otherwise, most of this would be achievable in 4.0 with some "Expert Mode" tweaks; you could create a category for "don't test listening" cards and then set your listening test type to exclude cards in that category. (and likewise for multi-choice) Or as an alternative you could create a 'hint' field to display in listening tests to help disambiguate homonyms. We also support running tests on multiple profiles simultaneously - 'meta profiles' - which would be a way to keep independent track on how you were doing with different sides of different cards. (this is how our new Anki emulation mode works, each 'side' has its own private hidden test profile)
The default behavior in 4.0 - our new built in "Learning" test type - has a set of fixed (customizable) intervals for a card to initially become 'learned' but then after that simply cycles through all of your 'review' cards without any particular attempt at optimizing intervals; we don't believe that for language learning, long term memory actually decays in the sort of exponential fashion that would make SRS worthwhile. (and it has some negative side effects, most importantly people's review queues getting too backlogged) So the distinctions between test types become less important with that, since all that an incorrect answer is doing is causing you to go back and repeat the card on that same type a few times before it once again returns to the review pool.
But if a lot of people start making similar requests to yours we can always consider making a more 'managed' take on Alternating a more official / easy-to-access option, all of this stuff is basically just a question of which particular JSON configuration files we ship built into the app