
Assert failure: Unhandled instruction #5203

Open
KeKsBoTer opened this issue Oct 1, 2024 · 4 comments
Comments

@KeKsBoTer commented Oct 1, 2024

The following code sample results in this error:
(0): error 99999: Slang compilation aborted due to an exception of N5Slang13InternalErrorE: assert failure: Unhandled instruction
Slang shader:

[AutoPyBindCUDA]
[CUDAKernel]
void gaussian_grad_bwd()
{
    var xz = diffPair(1.);
    bwd_diff(foo_gradient)(xz, float2(1.0));
}

[Differentiable]
float foo(float x)
{
    return exp(x);
}

[Differentiable]
float2 foo_gradient(float x)
{
    var x_diff = diffPair(x);

    bwd_diff(foo)(
        x_diff,
        1.0);

    return x_diff.d;
}

compiled with slangtorch:

import slangtorch
slangtorch.loadModule("error.slang")

The error is triggered by the statement bwd_diff(foo)(x_diff, 1.0); inside foo_gradient.

slangtorch version: 1.2.6

@bmillsNV bmillsNV added the goal:client support Feature or fix needed for a current slang user. label Oct 3, 2024
@bmillsNV bmillsNV added this to the Q4 2024 (Fall) milestone Oct 3, 2024
@bmillsNV (Collaborator) commented Oct 3, 2024

@saipraveenb25 can you help to take a look?

@saipraveenb25 (Collaborator) commented

@KeKsBoTer:
This code invokes bwd_diff twice on the same code: bwd_diff(foo_gradient) differentiates a function that itself calls bwd_diff(foo). A double-backward pass is not currently well supported.
Does your use case require two reverse-mode derivative passes? The resulting code is usually very inefficient (at least on the GPU).

For higher-order derivatives, it is (usually) much more efficient to use a bwd_diff over one or more fwd_diff (forward-mode derivative) calls to sweep the Hessian matrix row-by-row.

If you only need one reverse-mode pass, then there is no need to call bwd_diff again in gaussian_grad_bwd.
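For reference, here is a minimal sketch of the reverse-over-forward pattern described above, applied to the foo from the report. The helper name foo_deriv is hypothetical, and the sketch assumes the same diffPair / fwd_diff / bwd_diff API used in the original repro; it is an illustration, not tested code:

```slang
[Differentiable]
float foo(float x)
{
    return exp(x);
}

// Hypothetical helper: forward-mode first derivative of foo.
// fwd_diff(foo) returns a DifferentialPair; .d carries d(foo)/dx.
[Differentiable]
float foo_deriv(float x)
{
    return fwd_diff(foo)(diffPair(x, 1.0)).d;
}

void second_derivative_example()
{
    // A single reverse-mode pass over the forward-mode derivative:
    // after the call, xp.d should hold d2(foo)/dx2 at x = 1.
    var xp = diffPair(1.0);
    bwd_diff(foo_deriv)(xp, 1.0);
}
```

This avoids nesting bwd_diff inside a function that is itself passed to bwd_diff, which is the unsupported pattern that triggers the assert in this issue.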

@saipraveenb25 saipraveenb25 added the Needs reporter feedback Bugs awaiting more information from the reporter label Oct 3, 2024
@KeKsBoTer (Author) commented

Thanks for the quick reply!

I am using forward mode now, and it works for my case.

I just wanted to report this bug.

@bmillsNV (Collaborator) commented

@saipraveenb25 can you please update the autodiff user guide to document this restriction, then close this issue.

@bmillsNV bmillsNV removed the Needs reporter feedback Bugs awaiting more information from the reporter label Nov 7, 2024