xpu devices support llama-7b basic mode inference (turn on BlockAtten… #8588
Changes from all commits
81d7e07
8d67d0f
8717c5e
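
For context, the inference path this PR targets is normally exercised through PaddleNLP's llm/predictor.py script. A minimal invocation sketch follows; the flag names (--inference_model, --block_attn, --device) and the model identifier are assumptions based on that script's arguments around this release and may differ on this branch:

    # Hypothetical example: run llama-7b static "basic mode" inference on an
    # XPU device with BlockAttention enabled (flags assumed, verify locally).
    python llm/predictor.py \
        --model_name_or_path meta-llama/Llama-2-7b \
        --inference_model \
        --block_attn \
        --dtype float16 \
        --device xpu

If these flags hold, --inference_model selects the fused experimental transformer layers touched by this PR, --block_attn turns on the BlockAttention path named in the title, and --device xpu routes execution to the XPU backend.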
Check warnings on changed lines:
paddlenlp/experimental/transformers/bloom/modeling.py#L221
paddlenlp/experimental/transformers/bloom/modeling.py#L596
paddlenlp/experimental/transformers/chatglm/modeling.py#L275
paddlenlp/experimental/transformers/chatglm_v2/modeling.py#L204
paddlenlp/experimental/transformers/fused_transformer_layers.py#L18
paddlenlp/experimental/transformers/fused_transformer_layers.py#L32-L33
paddlenlp/experimental/transformers/fused_transformer_layers.py#L38
paddlenlp/experimental/transformers/fused_transformer_layers.py#L40
paddlenlp/experimental/transformers/fused_transformer_layers.py#L1352-L1354
paddlenlp/experimental/transformers/fused_transformer_layers.py#L1382-L1383
paddlenlp/experimental/transformers/fused_transformer_layers.py#L1421