Add permute_duplicate_pooled_embeddings op for CPU #1939
Conversation
This pull request was exported from Phabricator. Differential Revision: D48305145
This pull request has been merged in 117bc3e.
Summary:
This diff builds on top of the previous diff and adds support for permute_duplicate_pooled_embeddings on CPU.
Background
Currently, permute_pooled_embs_gpu does not support duplicates in a permutation, which poses a problem when passing the same embeddings to multiple modules. This doc proposes a solution that allows duplicate subsets in the resultant permutation.
Details
The required implementation of permute_duplicate_pooled_embs_gpu should support a subset being repeated. This is represented by duplicates in the permute list, which also means the output list can be larger than the input list, as in the example below (a reference sketch follows the example).
Input: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Offset_dims: [0, 2, 5, 6, 10]
Permute: [3, 0, 2, 1, 3]
Output: [6, 7, 8, 9, 0, 1, 5, 2, 3, 4, 6, 7, 8, 9]
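For illustration, here is a minimal PyTorch sketch of the semantics described above. It is not the actual FBGEMM kernel; the function name and the use of a plain Python loop with torch.cat are assumptions made only to reproduce the example.

```python
import torch

def permute_duplicate_pooled_embs_ref(pooled_embs, offset_dims, permute):
    # Hypothetical reference implementation (not the FBGEMM op itself):
    # each entry p in `permute` selects the slice
    # [offset_dims[p], offset_dims[p + 1]) of the input along the last dim.
    # Duplicates in `permute` copy the same slice more than once, so the
    # output can be wider than the input.
    pieces = [pooled_embs[..., offset_dims[p]:offset_dims[p + 1]] for p in permute]
    return torch.cat(pieces, dim=-1)

# Example from the summary:
x = torch.arange(10, dtype=torch.float32)
out = permute_duplicate_pooled_embs_ref(x, [0, 2, 5, 6, 10], [3, 0, 2, 1, 3])
print(out.tolist())  # [6, 7, 8, 9, 0, 1, 5, 2, 3, 4, 6, 7, 8, 9]
```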
Reviewed By: sryap
Differential Revision: D48305145