
Commit ebd2029

Change GPUS to GPUs (#36945)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

1 parent 69632aa commit ebd2029

File tree

8 files changed: +9 −9 lines changed


ISSUES.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -263,9 +263,9 @@ You are not required to read the following guidelines before opening an issue. H
 But if you're replying to a comment that happened some comments back it's always a good practice to quote just the relevant lines you're replying it. The `>` is used for quoting, or you can always use the menu to do so. For example your editor box will look like:

 ```
-> How big is your gpu cluster?
+> How big is your GPU cluster?

-Our cluster is made of 256 gpus.
+Our cluster is made of 256 GPUs.
 ```

 If you are addressing multiple comments, quote the relevant parts of each before your answer. Some people use the same comment to do multiple replies, others separate them into separate comments. Either way works. The latter approach helps for linking to a specific comment.
````

examples/legacy/seq2seq/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -209,7 +209,7 @@ th 56 \
 ```

 ### Multi-GPU Evaluation
-here is a command to run xsum evaluation on 8 GPUS. It is more than linearly faster than run_eval.py in some cases
+here is a command to run xsum evaluation on 8 GPUs. It is more than linearly faster than run_eval.py in some cases
 because it uses SortishSampler to minimize padding. You can also use it on 1 GPU. `data_dir` must have
 `{type_path}.source` and `{type_path}.target`. Run `./run_distributed_eval.py --help` for all clargs.

````
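The README lines above describe a distributed evaluation launch without showing one. As a rough sketch, a helper that assembles such a command could look like the following; the flag names are assumptions, not the script's real interface, and `./run_distributed_eval.py --help` remains the authoritative list:

```python
# Hypothetical helper that assembles the kind of multi-GPU evaluation
# command the README describes. Flag names below are illustrative
# assumptions; check `./run_distributed_eval.py --help` for the real ones.
def build_eval_cmd(num_gpus: int, data_dir: str, type_path: str = "test") -> list:
    # `data_dir` must contain {type_path}.source and {type_path}.target
    return [
        "python", "-m", "torch.distributed.launch",
        f"--nproc_per_node={num_gpus}",
        "run_distributed_eval.py",
        "--data_dir", data_dir,
        "--type_path", type_path,
    ]

print(" ".join(build_eval_cmd(8, "xsum")))
```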

tests/quantization/eetq_integration/test_eetq.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -158,7 +158,7 @@ def test_save_pretrained(self):
     def test_quantized_model_multi_gpu(self):
         """
         Simple test that checks if the quantized model is working properly with multiple GPUs
-        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUS
+        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
         """
         input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
         quantization_config = EetqConfig()
```
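The docstrings touched by this commit all lean on the same mechanism: `CUDA_VISIBLE_DEVICES` controls which physical GPUs a process can see. A minimal, GPU-free sketch of how the variable is interpreted (the helper below is ours for illustration, not part of the test suite):

```python
import os

# CUDA_VISIBLE_DEVICES is read at CUDA initialization: only the listed
# device indices are exposed to the process, renumbered from 0.
# Setting "0,1" therefore caps the tests above at two GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

def visible_gpu_count() -> int:
    """Illustrative helper (not from the test suite): count the devices
    a CUDA process would see given CUDA_VISIBLE_DEVICES."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return -1  # unset: every physical GPU is visible
    return len([d for d in value.split(",") if d.strip()])

print(visible_gpu_count())  # → 2
```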

tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -215,7 +215,7 @@ def test_change_loading_attributes(self):
     def test_quantized_model_multi_gpu(self):
         """
         Simple test that checks if the quantized model is working properly with multiple GPUs
-        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUS
+        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
         """
         input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
         quantization_config = FbgemmFp8Config()
```

tests/quantization/finegrained_fp8/test_fp8.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -193,7 +193,7 @@ def test_block_size(self):
     def test_quantized_model_multi_gpu(self):
         """
         Simple test that checks if the quantized model is working properly with multiple GPUs
-        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUS
+        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
         """
         input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(self.device_map)
         quantization_config = FineGrainedFP8Config()
```

tests/quantization/higgs/test_higgs.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -156,7 +156,7 @@ def test_save_pretrained(self):
     def test_quantized_model_multi_gpu(self):
         """
         Simple test that checks if the quantized model is working properly with multiple GPUs
-        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUS
+        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
         """
         input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
         quantization_config = HiggsConfig()
```

tests/quantization/torchao_integration/test_torchao.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -255,7 +255,7 @@ def test_int4wo_offload(self):
     def test_int4wo_quant_multi_gpu(self):
         """
         Simple test that checks if the quantized model int4 weight only is working properly with multiple GPUs
-        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUS
+        set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
         """

         quant_config = TorchAoConfig("int4_weight_only", **self.quant_scheme_kwargs)
```
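In multi-GPU tests like these, the model is typically spread across devices via a `device_map`. As a toy illustration only (not Accelerate's actual placement algorithm, which balances by module memory footprint), round-robin assignment of layers over the two visible GPUs looks like:

```python
# Toy sketch of a device_map: assign each transformer block to one of
# the two GPUs left visible by CUDA_VISIBLE_DEVICES=0,1, round-robin.
# Illustrative only; real libraries place modules by memory footprint.
layer_names = [f"model.layers.{i}" for i in range(6)]
num_gpus = 2

device_map = {name: i % num_gpus for i, name in enumerate(layer_names)}
print(device_map["model.layers.2"])  # → 0
```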

tests/sagemaker/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -138,7 +138,7 @@ images:
 
 ## Current Tests
 
-| ID | Description | Platform | #GPUS | Collected & evaluated metrics |
+| ID | Description | Platform | #GPUs | Collected & evaluated metrics |
 |-------------------------------------|-------------------------------------------------------------------|-----------------------------|-------|------------------------------------------|
 | pytorch-transfromers-test-single | test bert finetuning using BERT fromtransformerlib+PT | SageMaker createTrainingJob | 1 | train_runtime, eval_accuracy & eval_loss |
 | pytorch-transfromers-test-2-ddp | test bert finetuning using BERT from transformer lib+ PT DPP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss |
```
