
Commit 3c81f11

q10 authored and facebook-github-bot committed
Create separate targets for training and inference (pytorch#1757)
Summary:
Pull Request resolved: pytorch#1757

- Create separate targets for training and inference
- Redefine the old `embedding_ops` and `embedding_ops_cpu` as empty targets with `exported_defs` pointing to the new split targets

Differential Revision: D45687293

fbshipit-source-id: 151a36ea891e7fc209ad30f3daaac68ace254f2e
1 parent 0e24712 commit 3c81f11
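The BUCK changes themselves do not appear in this three-file view, but a rough Python sketch of the intent described in the summary could look like the following. The `load_embedding_ops` helper is hypothetical; only the split target names (`embedding_ops_{cuda,cpu}_{training,inference}`) come from this diff.

import torch

# Hypothetical convenience wrapper (not part of this commit) illustrating the
# split: training callers load the *_training libraries, while inference-only
# callers load the lighter *_inference ones.
def load_embedding_ops(training: bool = True) -> None:
    suffix = "training" if training else "inference"
    for device in ("cuda", "cpu"):
        torch.ops.load_library(
            f"//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_{device}_{suffix}"
        )

# Example: an inference-only service would call load_embedding_ops(training=False).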

File tree: 3 files changed (+29, −2 lines)

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+/*
+ * Copyright (c) Meta Platforms, Inc. and affiliates.
+ * All rights reserved.
+ * This source code is licensed under the BSD-style license found in the
+ * LICENSE file in the root directory of this source tree.
+ */
+
+/*
+  This is placeholder code to force compilation and generation of an
+  `libdeeplearning_fbgemm_fbgemm_gpu_codegen_embedding_ops.so` file, which
+  allows downstream PyTorch code to continue loading the `embedding_ops`
+  and `embedding_ops_cpu` (now-)shim targets correctly.
+*/
+namespace fbgemm_gpu {}
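A minimal sketch of the downstream behavior this placeholder preserves; the target paths are the pre-existing shim names mentioned in the comment above, and whether a given call site loads one or both is an assumption:

import torch

# Older call sites keep working because the shim targets still produce a
# loadable (now essentially empty) shared library.
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")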

fbgemm_gpu/codegen/split_embedding_codegen_lookup_invoker.template

Lines changed: 2 additions & 2 deletions
@@ -10,8 +10,8 @@ from .lookup_args import *
 
 
 {% if is_fbcode %}
-torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
-torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
+torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training")
+torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:cumem_utils")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops_cpu")

fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py

Lines changed: 13 additions & 0 deletions
@@ -19,6 +19,19 @@
 from fbgemm_gpu.split_embedding_configs import EmbOptimType as OptimType, SparseType
 from torch import nn, Tensor # usort:skip
 
+torch.ops.load_library(
+    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training"
+)
+torch.ops.load_library(
+    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_inference"
+)
+torch.ops.load_library(
+    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training"
+)
+torch.ops.load_library(
+    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_inference"
+)
+
 DEFAULT_ASSOC = 32 if torch.version.hip is None else 64
 # Maximum number of times prefetch() can be called without
 # a corresponding forward() call
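These module-level loads assume an fbcode build where the Buck target paths resolve. A defensive variant, not part of this commit, that tolerates environments where some of the new split targets are unavailable might look like:

import torch

# Hypothetical guarded loading; the real module loads the targets
# unconditionally, as shown in the diff above.
_SPLIT_TARGETS = (
    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training",
    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_inference",
    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training",
    "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_inference",
)
for _target in _SPLIT_TARGETS:
    try:
        torch.ops.load_library(_target)
    except OSError:
        # Library not available in this build environment; skip it.
        pass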
