
[RFC] Proposal to Update PPO Test to Add LR Scheduler #2423

Open
wants to merge 4 commits into main

Conversation

@Seoley Seoley commented Feb 22, 2025

Context

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Changelog

Related Issue: #2064 (comment)

I referred to #2263 and added an lr_scheduler option to the Mistral PPO recipe. Then I ran the recipe using the following command:
tune run ppo_full_finetune_single_device --config mistral/7B_full_ppo_low_memory

Here are the loss curves for reference:
PPO_loss_curve

Next, I needed to update the test code in recipes/test_ppo_full_finetune_single_device.py.

This commit adds the lr_scheduler option to recipes/test_ppo_full_finetune_single_device.py, but the test fails. I suspect the failure might be due to using Llama.
How should I resolve this issue? I would appreciate any advice.
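For readers unfamiliar with what "adding an lr_scheduler option" means in practice, here is a minimal, self-contained sketch of wiring a warmup-then-decay LR scheduler into a training loop. It uses only `torch.optim`'s built-in `LambdaLR`; the helper name `get_linear_warmup_scheduler` and the hyperparameter values are illustrative assumptions, not the actual code or config from this PR.

```python
# Hypothetical sketch: a linear warmup + linear decay LR schedule,
# stepped once per optimizer step, as a PPO recipe might do.
import torch


def get_linear_warmup_scheduler(optimizer, num_warmup_steps, num_training_steps):
    """Linearly warm the LR up, then linearly decay it to zero."""

    def lr_lambda(step):
        if step < num_warmup_steps:
            return step / max(1, num_warmup_steps)
        remaining = num_training_steps - step
        return max(0.0, remaining / max(1, num_training_steps - num_warmup_steps))

    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)


model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = get_linear_warmup_scheduler(
    optimizer, num_warmup_steps=10, num_training_steps=100
)

for step in range(100):
    optimizer.step()  # one optimizer step per PPO minibatch update
    scheduler.step()  # advance the LR schedule by one step
```

The key detail the review below turns on is that the scheduler must be stepped once per *optimizer* step, so the total step count passed to it has to match the number of optimizer steps the recipe actually takes.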

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Feb 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2423

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 17cb9ed with merge base 952078e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @Seoley!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Feb 23, 2025
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@codecov-commenter

Codecov Report

Attention: Patch coverage is 0% with 19 lines in your changes missing coverage. Please review.

Project coverage is 65.30%. Comparing base (e6cba25) to head (17cb9ed).
Report is 6 commits behind head on main.

Files with missing lines Patch % Lines
recipes/ppo_full_finetune_single_device.py 0.00% 19 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2423      +/-   ##
==========================================
+ Coverage   63.87%   65.30%   +1.43%     
==========================================
  Files         368      374       +6     
  Lines       21873    22163     +290     
==========================================
+ Hits        13971    14474     +503     
+ Misses       7902     7689     -213     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

Contributor

@joecummings joecummings left a comment


@SalmanMohammadi Can you take a look?

self._steps_per_epoch = (
len(self._dataloader) // self._gradient_accumulation_steps
)
self.global_step = self._epochs_run * self._steps_per_epoch
Collaborator


global_step is already defined here; how come you're re-defining it here?

@@ -257,6 +258,18 @@ def setup(self, cfg: DictConfig) -> None:
* (self.batch_size // self._ppo_batch_size)
)

self._steps_per_epoch = (
Collaborator


I think we should name this lr_steps and the correct value would be the total number of optimizer steps being taken, which should be self._total_steps * self._ppo_epochs * (self.batch_size // self._ppo_batch_size), right?
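To make the suggested calculation concrete, here is a small worked example of the `lr_steps` formula from the comment above. The numeric values are illustrative placeholders, not the actual values from the Mistral PPO config.

```python
# Illustrative arithmetic for the suggested lr_steps formula:
# total optimizer steps = total PPO steps * PPO epochs * (batch_size // ppo_batch_size).
total_steps = 4      # number of PPO iterations (illustrative value)
ppo_epochs = 2       # optimization epochs per PPO iteration (illustrative)
batch_size = 64      # trajectory batch size (illustrative)
ppo_batch_size = 16  # minibatch size per optimizer step (illustrative)

lr_steps = total_steps * ppo_epochs * (batch_size // ppo_batch_size)
print(lr_steps)  # 4 * 2 * 4 = 32 optimizer steps for the scheduler
```

The point of the formula is that the scheduler sees one step per optimizer step, so it must account for every PPO epoch and every minibatch within each batch, not just the number of PPO iterations.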

@SalmanMohammadi
Copy link
Collaborator

Hi @Seoley! Thanks so much for taking this on. Sorry for the delay in getting round to a review here.

I think this generally looks good - I think the only comments I have are how we calculate the number of steps for the scheduler, which I've left inline. Let me know what you think : )
