
[Feature] Retries on Pydantic Validation Errors #7693


Open
2 tasks
fmmoret opened this issue Feb 3, 2025 · 2 comments · May be fixed by #8050
Assignees
Labels
enhancement New feature or request

Comments

@fmmoret

fmmoret commented Feb 3, 2025

What feature would you like to see?

Maybe I'm wrong, but I thought I recalled the old version of DSPy (pre-LiteLLM / pre-adapter era) retrying on pydantic validation failures. However, I'm not seeing any retrying in the code paths the stack trace points to (predict.py, json_adapter.py).
Is there a built-in way in the new version of DSPy to control retries, or do I need to implement it in my own modules now?

E.g.:

pydantic_core._pydantic_core.ValidationError: 1 validation error for SomeSignature
some_list
  List should have at most 3 items after validation, not 4 [type=too_long, input_value=[1,2,3,4], input_type=list]
    For further information visit https://errors.pydantic.dev/2.9/v/too_long

I would expect this to feed back into the LLM to let it correct its outputs.
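The expected behaviour could be sketched in plain Python like this. To be clear, this is purely illustrative and not DSPy's actual API: `call_lm`, `parse_output`, and the `ValidationError` stand-in are all hypothetical names.

```python
# Hypothetical sketch of retry-with-feedback on a validation failure.
# None of these names are real DSPy APIs; `ValidationError` stands in
# for pydantic_core.ValidationError.

class ValidationError(Exception):
    """Stand-in for pydantic_core.ValidationError."""

def retry_on_validation_error(call_lm, parse_output, prompt, max_retries=3):
    feedback = ""
    for _ in range(max_retries + 1):
        raw = call_lm(prompt + feedback)
        try:
            # parse_output validates the raw completion (e.g. via pydantic)
            # and raises ValidationError if the output is malformed.
            return parse_output(raw)
        except ValidationError as err:
            # Feed the validation error back so the model can correct itself.
            feedback = f"\n\nYour previous output was invalid: {err}. Please fix it."
    raise ValidationError(f"output still invalid after {max_retries} retries")
```

The key point is the `except` branch: instead of failing fast, the validation error text is appended to the next prompt.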

Would you like to contribute?

  • Yes, I'd like to help implement this.
  • No, I just want to request it.

Additional Context

No response

@fmmoret fmmoret added the enhancement New feature or request label Feb 3, 2025
@bnsh

bnsh commented Feb 20, 2025

I agree. So here's what I see: when they switched to using LiteLLM, there's this:

num_retries: The number of times to retry a request if it fails transiently due to

which seems to do something different from what I remember dspy.TypedPredictor doing.

Once upon a time: at least at commit hash b32b2ab:

When it got a TypeError, it would:

  1. Capture the error
  2. Make an example
  3. Send that back to the LLM

I agree that this was useful behaviour.
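The "make an example" step above might be sketched like this. This is purely illustrative; `make_correction_example` is not the actual name of anything in b32b2ab.

```python
# Illustrative sketch: turn a failed attempt into an in-context
# correction example for the next call. All names here are hypothetical.

def make_correction_example(bad_output, error):
    # Package the failure so it can be included in the next prompt
    # as a demonstration of what to avoid.
    return {
        "previous_output": bad_output,
        "error": str(error),
        "instruction": "Produce a corrected output that avoids the error above.",
    }
```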

In looking into it, though, I found dspy.predict.retry.py, which is entirely commented out. I was all set to work on adding the retrying back in until I saw that. Can someone explain what the plan is for retrying on Pydantic validation failures?

@okhat
Collaborator

okhat commented Feb 27, 2025

Below is a message that may be useful.

While we prepare a short tutorial on this: folks looking to migrate from 2.5-style Assertions can now use dspy.BestOfN or dspy.Refine, which replace the Assertions functionality with streamlined modules.

module = dspy.ChainOfThought(...) # or a complex multi-step dspy.Module
module = dspy.BestOfN(module, N=5, reward_fn=reward_fn, threshold=1.0)

module(...) # at most 5 retries, picking best reward, but stopping if `threshold` is reached

where reward functions can return scalar values such as float or bool, e.g.:

def reward_fn(input_kwargs, prediction):
    return len(prediction.field1) == len(prediction.field2)
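For anyone unfamiliar with the module, the retry-best-reward loop described above can be approximated in plain Python. This is a rough stand-in for what `dspy.BestOfN` does, not its actual implementation:

```python
# Plain-Python approximation of the BestOfN pattern: call the module
# up to n times, keep the highest-reward prediction, and stop early
# once the reward meets `threshold`.

def best_of_n(module, kwargs, reward_fn, n=5, threshold=1.0):
    best_pred, best_reward = None, float("-inf")
    for _ in range(n):
        pred = module(**kwargs)
        reward = reward_fn(kwargs, pred)
        if reward > best_reward:
            best_pred, best_reward = pred, reward
        if best_reward >= threshold:
            break  # good enough; stop retrying
    return best_pred
```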

@TomeHirata TomeHirata self-assigned this Mar 27, 2025