
Commit abfce26

q10 authored and facebook-github-bot committed
Create separate targets for training and inference (pytorch#1757)
Summary:
Pull Request resolved: pytorch#1757

- Create separate targets for training and inference
- Redefine the old `embedding_ops` and `embedding_ops_cpu` as empty targets with `exported_deps` pointing to the new split targets

Differential Revision: D45687293

fbshipit-source-id: 3bb7d0dd95211a2ae464528a5714e2bbe041cffe
1 parent 0e24712 commit abfce26
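As context for the summary above, the shim pattern it describes would look roughly like the following Buck-style sketch. This is not the actual TARGETS file from the commit; the rule name, the placeholder source name, and the inference target name are assumptions (only the `*_training` target names appear in this diff):

cpp_library(
    name = "embedding_ops",  # old monolithic target, kept as an empty shim
    srcs = ["embedding_ops_placeholder.cpp"],  # hypothetical placeholder source name
    exported_deps = [
        ":embedding_ops_cuda_training",   # new split target (referenced in this commit)
        ":embedding_ops_cuda_inference",  # assumed inference counterpart, not shown here
    ],
)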

3 files changed: +24 -2 lines
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+/*
+ * Copyright (c) Meta Platforms, Inc. and affiliates.
+ * All rights reserved.
+ * This source code is licensed under the BSD-style license found in the
+ * LICENSE file in the root directory of this source tree.
+ */
+
+/*
+  This is placeholder code to force compilation and generation of a
+  `libdeeplearning_fbgemm_fbgemm_gpu_codegen_embedding_ops.so` file, which
+  allows downstream PyTorch code to continue loading the `embedding_ops`
+  and `embedding_ops_cpu` (now-)shim targets correctly.
+*/
+
+namespace fbgemm_gpu {}
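The downstream loading that this placeholder keeps working is visible later in this commit; as a standalone sketch of the fbcode-only path (OSS builds skip these calls):

import torch

# fbcode-internal path: the shim target still produces the
# libdeeplearning_fbgemm_fbgemm_gpu_codegen_embedding_ops.so named above,
# while the real operators are pulled in through the split targets.
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")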

fbgemm_gpu/codegen/split_embedding_codegen_lookup_invoker.template

Lines changed: 2 additions & 2 deletions
@@ -10,8 +10,8 @@ from .lookup_args import *
 
 
 {% if is_fbcode %}
-torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
-torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
+torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training")
+torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:cumem_utils")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops")
 torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops_cpu")

fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py

Lines changed: 8 additions & 0 deletions
@@ -14,11 +14,19 @@
 from math import log2
 from typing import Dict, List, NamedTuple, Optional, Tuple, Type, Union
 
+import fbgemm_gpu
 import fbgemm_gpu.split_embedding_codegen_lookup_invokers as invokers
 import torch  # usort:skip
 from fbgemm_gpu.split_embedding_configs import EmbOptimType as OptimType, SparseType
 from torch import nn, Tensor  # usort:skip
 
+# pyre-fixme[16]: Module `fbgemm_gpu` has no attribute `open_source`.
+open_source: bool = getattr(fbgemm_gpu, "open_source", False)
+
+if not open_source:
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
+    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
+
 DEFAULT_ASSOC = 32 if torch.version.hip is None else 64
 # Maximum number of times prefetch() can be called without
 # a corresponding forward() call
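The new guard reads `open_source` via getattr with a default of False, so these load_library calls run only in the fbcode build. For the check to short-circuit in OSS, the package presumably exposes the attribute in its __init__; a minimal sketch of that assumption (this file is not part of the commit):

# Hypothetical excerpt of fbgemm_gpu/__init__.py in the open-source build;
# it only needs to define the attribute that the getattr() probe above reads.
open_source: bool = True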
