
allow parallelLevel as 1 to start torchrun nproc-per-node #2608

Merged: 4 commits into master on Oct 9, 2023

Conversation

@lxning (Collaborator) commented on Sep 21, 2023

Description


This PR supports the TP Llama example, which requires launching workers with torchrun and nproc-per-node=1.
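For context, the configured parallel level maps onto torchrun's --nproc-per-node argument, and this change lets that mapping include the value 1. Below is a minimal, hypothetical Java sketch of that mapping; the class, method, and entry-point names are illustrative and are not the actual TorchServe launcher code.

// Hypothetical sketch, not TorchServe's actual launcher: shows how a worker
// command could be prefixed with torchrun when a parallel level is configured,
// including the parallelLevel == 1 case this PR enables.
import java.util.ArrayList;
import java.util.List;

public class TorchrunCommandSketch {
    static List<String> buildCommand(int parallelLevel, String entryPoint) {
        List<String> argl = new ArrayList<>();
        if (parallelLevel > 0) {
            // Any positive parallel level, even 1, is launched through torchrun.
            argl.add("torchrun");
            argl.add("--nproc-per-node");
            argl.add(String.valueOf(parallelLevel));
        } else {
            // No parallel level configured: plain python worker.
            argl.add("python");
        }
        argl.add(entryPoint);
        return argl;
    }

    public static void main(String[] args) {
        // Prints: [torchrun, --nproc-per-node, 1, model_service_worker.py]
        System.out.println(buildCommand(1, "model_service_worker.py"));
    }
}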

Fixes #(issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing


The test is demonstrated in the TP Llama example.

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@lxning changed the title from "allow parallelLevel as 1 to start torchrun nproc-per-node" to "[WIP] allow parallelLevel as 1 to start torchrun nproc-per-node" on Sep 21, 2023
codecov bot commented on Sep 21, 2023

Codecov Report

Merging #2608 (ea95b96) into master (f57240f) will not change coverage.
The diff coverage is n/a.

❗ Current head ea95b96 differs from the pull request's most recent head 2cad147. Consider uploading reports for commit 2cad147 to get more accurate results.

@@           Coverage Diff           @@
##           master    #2608   +/-   ##
=======================================
  Coverage   72.39%   72.39%           
=======================================
  Files          85       85           
  Lines        3956     3956           
  Branches       58       58           
=======================================
  Hits         2864     2864           
  Misses       1088     1088           
  Partials        4        4           


@lxning changed the title from "[WIP] allow parallelLevel as 1 to start torchrun nproc-per-node" to "allow parallelLevel as 1 to start torchrun nproc-per-node" on Sep 21, 2023
@HamidShojanazeri (Collaborator) left a comment:

Thanks @lxning, LGTM. I could successfully test it.

@mreso (Collaborator) left a comment:

LGTM, see comments. Would be good to make sure that this scenario is covered by one of our unit tests.

logger.warn("Invalid parallelLevel:{}, set as 1", parallelLevel);
this.parallelLevel = 1;
if (parallelLevel < 0) {
logger.warn("Invalid parallelLevel:{}, set as 0", parallelLevel);
Collaborator comment:

If you set parallelLevel to 1 and then to -1, the level stays at 1 and is not set to 0 as the warning indicates.
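To make the reviewer's point concrete, here is a minimal, self-contained sketch of the behavior described above; the class and field names are assumed for illustration and are not the real TorchServe Model class.

// Sketch: a negative value only logs a warning and returns, so the
// previously stored level is left in place.
public class ParallelLevelSketch {
    private int parallelLevel = 0;

    void setParallelLevel(int parallelLevel) {
        if (parallelLevel < 0) {
            // Warns "set as 0" but returns without touching the field.
            System.out.printf("Invalid parallelLevel:%d, set as 0%n", parallelLevel);
            return;
        }
        this.parallelLevel = parallelLevel;
    }

    public static void main(String[] args) {
        ParallelLevelSketch model = new ParallelLevelSketch();
        model.setParallelLevel(1);
        model.setParallelLevel(-1); // warning claims 0, but...
        System.out.println(model.parallelLevel); // ...prints 1
    }
}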

@@ -153,7 +153,7 @@ public void startWorker(int port, String deviceIds)
         argl.add(configManager.getMetricsConfigPath());

         try {
-            latch = new CountDownLatch(model.getParallelLevel());
+            latch = new CountDownLatch(model.getParallelLevel() > 0 ? model.getParallelLevel() : 1);
Collaborator comment:

nit: you could replace this and the following occurrences with the more concise max(1, model.getParallelLevel())
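As a rough illustration of this suggestion (a standalone sketch, not the actual startWorker code shown in the diff above), Math.max collapses the ternary guard into a single expression:

import java.util.concurrent.CountDownLatch;

public class LatchCountSketch {
    public static void main(String[] args) throws InterruptedException {
        int reportedLevel = 0; // e.g. what model.getParallelLevel() might return
        // Clamp to at least 1 instead of the ternary form used in the diff above.
        CountDownLatch latch = new CountDownLatch(Math.max(1, reportedLevel));
        System.out.println(latch.getCount()); // 1
        latch.countDown();
        latch.await(); // returns immediately once the count reaches 0
    }
}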

@lxning enabled auto-merge October 6, 2023 00:49
@lxning added this pull request to the merge queue Oct 8, 2023
github-merge-queue bot removed this pull request from the merge queue due to failed status checks Oct 9, 2023
@lxning added this pull request to the merge queue Oct 9, 2023
Merged via the queue into master with commit 504c734 Oct 9, 2023
13 checks passed
3 participants