
Adding support for different types for spec and push consts#242

Merged
axsaucedo merged 19 commits into master from multi_types_consts on Sep 12, 2021

Conversation

@axsaucedo
Member

@axsaucedo axsaucedo commented Sep 12, 2021

Fixes:

Adds support for different types for specialization and push constants. For push constants it is also possible to use custom structs.

This means that constants can be defined as:

struct TestConsts{
    float x;
    uint32_t y;
    int32_t z;
};

std::shared_ptr<kp::Algorithm> algo = mgr.algorithm<float, TestConsts>(..., {{ 0.1f, 10, -10 }});

but they can also be defined as an array, as follows:

std::shared_ptr<kp::Algorithm> algo = mgr.algorithm<double, double>(..., {0.1, 0.2, 0.3});

By default it is still possible to use plain float, without specifying the template parameters:

std::shared_ptr<kp::Algorithm> algo = mgr.algorithm(..., {0.1f, 0.2f, 0.3f});
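The reason a mixed-type struct such as TestConsts can be uploaded as a single push-constant blob is that its members occupy contiguous memory matching the shader's push_constant block. The following sketch (not part of the PR, just an illustration using Python's standard struct module) shows the byte layout that a {float, uint32_t, int32_t} struct produces:

```python
import struct

# "<fIi" = little-endian: float32, uint32, int32 -> 4 + 4 + 4 = 12 bytes,
# the same contiguous layout the GLSL push_constant block above expects.
packed = struct.pack("<fIi", 0.1, 10, -10)
print(len(packed))  # 12

# Round-trip to confirm each member lands in its own 4-byte slot.
x, y, z = struct.unpack("<fIi", packed)
```

Note that this simple case has no alignment padding; structs mixing differently sized members may need padding to satisfy Vulkan's layout rules.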

Full example below:

std::string shader(R"(
  #version 450
  layout(push_constant) uniform PushConstants {
    float x;
    uint y;
    int z;
  } pcs;
  layout (local_size_x = 1) in;
  layout(set = 0, binding = 0) buffer a { float pa[]; };
  void main() {
      pa[0] += pcs.x;
      pa[1] += pcs.y - 2147483000;
      pa[2] += pcs.z;
  })");

struct TestConsts{
    float x;
    uint32_t y;
    int32_t z;
};

std::vector<uint32_t> spirv = compileSource(shader);

std::shared_ptr<kp::Sequence> sq = nullptr;

{
    kp::Manager mgr;

    std::shared_ptr<kp::TensorT<float>> tensor =
      mgr.tensorT<float>({ 0, 0, 0 });

    std::shared_ptr<kp::Algorithm> algo = mgr.algorithm<float, TestConsts>(
      { tensor }, spirv, kp::Workgroup({ 1 }), {}, {{ 0, 0, 0 }});

    sq = mgr.sequence()->eval<kp::OpTensorSyncDevice>({ tensor });

    // We need to run this in sequence to avoid race condition
    // We can't use atomicAdd as swiftshader doesn't support it for
    // float
    sq->eval<kp::OpAlgoDispatch>(algo, std::vector<TestConsts>{{ 15.32, 2147483650, 10 }});
    sq->eval<kp::OpAlgoDispatch>(algo, std::vector<TestConsts>{{ 30.32, 2147483650, -3 }});
    sq->eval<kp::OpTensorSyncLocal>({ tensor });

    EXPECT_EQ(tensor->vector(), std::vector<float>({ 45.64, 1300, 7 }));
}

Python interface

The Python interface is exposed as follows:

spirv = compile_source("""
      #version 450
      layout(push_constant) uniform PushConstants {
        int x;
        int  y;
        int  z;
      } pcs;
      layout (local_size_x = 1) in;
      layout(set = 0, binding = 0) buffer a { int  pa[]; };
      void main() {
          pa[0] += pcs.x;
          pa[1] += pcs.y;
          pa[2] += pcs.z;
      }
""")

mgr = kp.Manager()

tensor = mgr.tensor_t(np.array([0, 0, 0], dtype=np.int32))

spec_consts = np.array([], dtype=np.int32)
push_consts = np.array([-1, -1, -1], dtype=np.int32)

algo = mgr.algorithm_t([tensor], spirv, (1, 1, 1), spec_consts, push_consts)

(mgr.sequence()
    .record(kp.OpTensorSyncDevice([tensor]))
    .record(kp.OpAlgoDispatch(algo))
    .record(kp.OpAlgoDispatch(algo, np.array([-1, -1, -1], dtype=np.int32)))
    .record(kp.OpAlgoDispatch(algo, np.array([-1, -1, -1], dtype=np.int32)))
    .record(kp.OpTensorSyncLocal([tensor]))
    .eval())

assert np.all(tensor.data() == np.array([-3, -3, -3], dtype=np.int32))
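On the Python side, mixed-type push constants like the C++ TestConsts example could be described with a NumPy structured dtype, which produces the same 12-byte layout. Whether the bindings in this PR accept structured arrays is not shown above (the example uses a plain int32 array), so this is only an illustration of the matching memory layout:

```python
import numpy as np

# Structured dtype mirroring struct TestConsts { float x; uint32_t y; int32_t z; }
# "<f4"/"<u4"/"<i4" are little-endian 4-byte float/uint/int.
test_consts = np.dtype([("x", "<f4"), ("y", "<u4"), ("z", "<i4")])

pc = np.array([(15.32, 2147483650, 10)], dtype=test_consts)
print(pc.itemsize)  # 12 bytes per element, same as the C++ struct
```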

@axsaucedo axsaucedo force-pushed the multi_types_consts branch 5 times, most recently from bab3cf2 to 646fe61 on September 12, 2021 at 11:23
Signed-off-by: Alejandro Saucedo <axsauze@gmail.com>
@axsaucedo axsaucedo merged commit b7643a1 into master Sep 12, 2021