
Commit 5ec6dc6

q10 authored and facebook-github-bot committed
Create separate targets for training and inference (pytorch#1757)
Summary:
Pull Request resolved: pytorch#1757

- Create separate targets for training and inference
- Redefine the old `embedding_ops` and `embedding_ops_cpu` as empty targets with `exported_deps` pointing to the new split targets

Reviewed By: sryap

Differential Revision: D45687293

fbshipit-source-id: 9907225a3b557916dd11caccb2612528cca234a4
1 parent f46904e commit 5ec6dc6
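
As a hedged illustration of the shim described in the summary, here is a minimal BUCK-style sketch. The macro name (`cpp_library`), the placeholder source filename, and the exact split target names are assumptions for illustration, not taken from this commit:

# Hedged sketch: the legacy target keeps its old name but carries only a
# placeholder source file; exported_deps re-export the new split training
# targets, so existing dependents keep linking and loading unchanged.
cpp_library(
    name = "embedding_ops",
    srcs = ["embedding_ops_placeholder.cpp"],  # hypothetical filename
    exported_deps = [
        ":embedding_ops_cuda_training",
        ":embedding_ops_cpu_training",
    ],
)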

File tree

3 files changed: +30 −2 lines changed
Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+/*
+ * Copyright (c) Meta Platforms, Inc. and affiliates.
+ * All rights reserved.
+ *
+ * This source code is licensed under the BSD-style license found in the
+ * LICENSE file in the root directory of this source tree.
+ */
+
+/*
+  This is placeholder code to force compilation and generation of a
+  `libdeeplearning_fbgemm_fbgemm_gpu_codegen_embedding_ops.so` file, which
+  allows downstream PyTorch code to continue loading the `embedding_ops`
+  and `embedding_ops_cpu` (now-)shim targets correctly.
+*/
+namespace fbgemm_gpu {}
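
Because the shim re-exports the split targets, downstream loads of the old target name keep working. A minimal hedged sketch of what that looks like from Python, using the fbcode-style target path that the diffs below also use; the probed op name is an assumption, not something this commit shows:

import torch

# Loading the shim target still resolves to a real .so; the actual op
# registrations arrive transitively via the shim's exported deps.
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")

# Hypothetical probe: op names under torch.ops.fbgemm resolve lazily, so
# hasattr() is a cheap way to check whether the lookup ops were registered.
print(hasattr(torch.ops.fbgemm, "split_embedding_codegen_lookup_sgd_function"))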

fbgemm_gpu/codegen/split_embedding_codegen_lookup_invoker.template

Lines changed: 9 additions & 2 deletions
@@ -10,8 +10,15 @@ from .lookup_args import *


 {% if is_fbcode %}
-torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
-torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
+
+# Provide compatibility to downstream packages for eventual migration to the split training / inference packages
+try:
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training")
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training")
+except Exception:
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
+
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:cumem_utils")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops_cpu")
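
The try/except above prefers the new split training libraries and falls back to the legacy combined ones when they are unavailable. A hedged sketch of the same pattern factored into a reusable helper; the helper name `_load_with_fallback` is hypothetical and not part of this commit:

import torch

def _load_with_fallback(preferred, legacy):
    # Try the split training targets first; if any fail to load
    # (e.g. on an older build), fall back to the legacy targets.
    try:
        for target in preferred:
            torch.ops.load_library(target)
    except Exception:
        for target in legacy:
            torch.ops.load_library(target)

_load_with_fallback(
    preferred=[
        "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training",
        "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training",
    ],
    legacy=[
        "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops",
        "//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu",
    ],
)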

fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py

Lines changed: 6 additions & 0 deletions
@@ -19,6 +19,12 @@
 from fbgemm_gpu.split_embedding_configs import EmbOptimType as OptimType, SparseType
 from torch import nn, Tensor  # usort:skip

+try:
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
+except Exception:
+    pass
+
 DEFAULT_ASSOC = 32 if torch.version.hip is None else 64
 # Maximum number of times prefetch() can be called without
 # a corresponding forward() call
