example: Add an example of Pointnet inference implementation #2845

Draft · wants to merge 9 commits into main

Conversation

Contributor

@s-Nick s-Nick commented Mar 10, 2025

Description

Converted to DRAFT to address comments; it will be closed and moved to oneAPI Samples, as suggested, once the comments are resolved.

This PR adds a useful example of how to implement a full model using oneDNN. It implements inference of the PointNet model using the ModelNet10 dataset. The example also comes with a Python script that, using a pre-trained model, converts data to the point cloud used as input for the inference example.

This example is needed to help us move existing portDNN users to oneDNN by showing that everything they are used to achieving with portDNN is possible with oneDNN. It would also allow Codeplay Software to properly archive portDNN.

Checklist

General

  • Have you formatted the code using clang-format?


s-Nick added 2 commits March 10, 2025 09:10
Add an implementation of pointnet model as example of more complex NN.
Example working with ModelNet10 input
@s-Nick s-Nick requested review from a team as code owners March 10, 2025 09:23
@github-actions github-actions bot added documentation A request to change/fix/improve the documentation. Codeowner: @oneapi-src/onednn-doc component:examples labels Mar 10, 2025
s-Nick added 4 commits March 12, 2025 09:07
Output of a layer was stored in a pointer; now the output is stored in a
dnnl::memory object that is passed to the following layer. This removes
the need to synchronize after each layer execution.
properly enable bias in FCLayer
The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`. The samples must first
be passed the directory where the binary weights files are stored and the second
argument should be the preprocessed pointcloud that should be classified. The expected
Contributor

oneDNN examples are used in various CI environments and across different architectures. A dependency on PyTorch would be a bit of a hassle. I would suggest putting this one into oneAPI Samples instead of the main repo.

+@onednnsupporttriage

Contributor Author

Thank you for suggesting a better place for our sample. Once all comments are resolved, I'll open another PR to the appropriate repo.


The oneAPI oneDNN sample code (https://github.com/oneapi-src/oneAPI-samples/tree/master/Libraries/oneDNN) looks like a better place for this sample; @onednnsupporttriage can help with this code check.

@s-Nick s-Nick marked this pull request as draft March 12, 2025 16:24
Contributor
@ranukund ranukund left a comment

A few edits suggested, please incorporate as you see fit.

@@ -0,0 +1,35 @@
# PointNet Convolutional Neural Network Sample for 3D Pointcloud Classification

[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.
Contributor

Suggested change
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, providing a comprehensive example of using oneDNN. You can see the following initial instructions on using the samples.


[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.

## Obtaining the model weights and classes and preparing an input pointcloud
Contributor

Suggested change
## Obtaining the model weights and classes and preparing an input pointcloud
## Obtain the Model Weights and Classes and Prepare an Input pointcloud


## Obtaining the model weights and classes and preparing an input pointcloud

A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
Contributor

Suggested change
A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
A preprocessing script is provided which unpacks the weights from a pre-trained PyTorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch].
First, download the pre-trained PointNet weights and then move the pth file into the same directory containing the model.

```
python3 prepareData.py ModelNet10/ pointnet_model.pth
```

The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`
Contributor

Suggested change
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`.


The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`

## Testing on a pointcloud
Contributor

Suggested change
## Testing on a pointcloud
## Test on a pointcloud

is built by the target `network-pointnet-cpp`. The samples must first
be passed the directory where the binary weights files are stored and the second
argument should be the preprocessed pointcloud that should be classified. The expected
output is of a classification index and a series of times in nanoseconds that corresond
Contributor

Suggested change
output is of a classification index and a series of times in nanoseconds that corresond
output is a classification index and a series of times in nanoseconds that correspond

## Testing on a pointcloud

The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`. The samples must first
Contributor

Is the directory containing the binary weight files passed as the first argument when running a oneDNN sample? Suggesting a rewrite for consideration to improve readability; please modify as you see fit.

To test a sample, the directory where the binary weights files are stored must be passed as the first argument. The second argument should be the preprocessed pointcloud that should be classified.

Contributor Author

Thank you for your feedback @ranukund, I updated everything in dedc77d

## Test on a pointcloud

The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`.
Contributor

Since this example will be moved to oneAPI Samples and will not be built with the oneDNN source code, could you please add the bash command line to build this example and link libdnnl.so?

Contributor
@ranukund ranukund left a comment

Minor edits suggested, thanks!

to the total time to run the network on an input, not including data transfer time.
is built by the target `network-pointnet-cpp`.
The oneDNN samples are built using the default CMake configuration.
To run the sample, provide as first argument the path to the directory containing the binary weight files and as second argument the path to the preprocessed point cloud file to be classified.
Contributor

To run the sample, provide the path to the directory containing the binary weight files as the first argument and the path to the preprocessed point cloud file to be classified as the second argument.


## Obtaining the model weights and classes and preparing an input pointcloud
## Obtain the Model Weights and Classes and Preparing an Input pointcloud
Contributor

Obtain the Model Weights and Classes and Prepare an Input pointcloud


A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
A preprocessing script is provided which unpacks the weights from a pre-trained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch].
Contributor

A preprocessing script is provided which unpacks the weights from a pre-trained PyTorch model.

kminemur commented Mar 15, 2025

Thank you for the great commit introducing this new sample!

However, I encountered some issues while trying to run the sample on an Ubuntu 24.04 LTS environment. The current steps may not be complete, particularly regarding the target model version and the corresponding PyTorch version. Additionally, some setup steps need clarification:

Python Environment:
It’s unclear which numpy and torch versions are compatible with the point cloud dataset and the pre-trained model provided.

Necessary packages include:

pip install numpy==xxx
pip install torch==yyy

Extracted Data Manipulation:
Running prepareData.py failed due to redundant files being extracted when unzipping ModelNet10.zip.

Files like .DS_Store and README.txt in the ModelNet10 directory need to be removed for the script to work properly.

Execution example
Using the pre-trained model from https://github.com/nikitakaraevv/pointnet?tab=readme-ov-file, I got incorrect results.

$ ls
bathtub_cloud.bin  desk_cloud.bin     night_stand_cloud.bin  README.md        toilet_cloud.bin
bed_cloud.bin      dresser_cloud.bin  pointnet.cpp           save.pth
chair_cloud.bin    ModelNet10         pointnet_venv          sofa_cloud.bin
data               monitor_cloud.bin  prepareData.py         table_cloud.bin

$ pip list | grep torch
torch                    2.6.0

$ network-pointnet-cpp data sofa_cloud.bin
classed as 0 (i.e., bathtub)
$ network-pointnet-cpp data table_cloud.bin
classed as 0 (i.e., bathtub)

constexpr int axis = 1;

softmax_pd = dnnl::softmax_forward::primitive_desc(this->engine_,
dnnl::prop_kind::forward_training, algo, src_md,
Contributor

Minor: Should be dnnl::prop_kind::forward_inference
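A minimal sketch of the suggested change, reusing the names from the quoted snippet (`softmax_pd`, `engine_`, `algo`, `src_md`, `axis`) and assuming the destination descriptor matches the source:

```cpp
// Sketch only: use the inference propagation kind. src_md is reused as the
// destination descriptor here, which is an assumption, not the PR's exact code.
softmax_pd = dnnl::softmax_forward::primitive_desc(this->engine_,
        dnnl::prop_kind::forward_inference, algo, src_md, src_md, axis);
```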

void execute(dnnl::memory &in_mem) override {
auto pooling_prim = dnnl::pooling_forward(pooling_pd);

// Primitive arguments. Set up in-place execution by assigning src as DST.
Contributor

This seems to be an unrelated comment; pooling does not support in-place operations.
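For contrast, a hedged sketch of the out-of-place execution this layer needs, assuming a `dnnl::stream` named `strm` is available and allocating the destination from the pooling primitive descriptor (`dst_mem` is illustrative, not the PR's code):

```cpp
// Sketch: pooling writes to a separate destination memory, not in place.
dnnl::memory dst_mem(pooling_pd.dst_desc(), this->engine_);
auto pooling_prim = dnnl::pooling_forward(pooling_pd);
pooling_prim.execute(strm, {{DNNL_ARG_SRC, in_mem}, {DNNL_ARG_DST, dst_mem}});
```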

Comment on lines +353 to +357
matmul_pd = dnnl::matmul::primitive_desc(
this->engine_, src_md, weights_md, bias_md, this->out_desc_);

// Create the primitive.
auto matmul_prim = dnnl::matmul(matmul_pd);
Contributor
@AD2605 AD2605 Mar 17, 2025

The creation of the primitive descriptor and the primitive should be inside the class constructor; otherwise, on each execute call you recreate the primitive descriptor and the primitive, which are expensive processes. The primitive descriptor creation iterates over the available implementations and chooses the first one that returns status::success, and is typically also responsible for allocating any scratchpad memory. The primitive creation builds the kernel, so both need to be moved to the class constructor. Then, in execute, all you need to do is set the arguments and invoke the layer.

The same applies to all other layers as well.
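A minimal sketch of what this restructuring could look like for a fully connected layer wrapping dnnl::matmul; the class and member names below (FCLayer, engine_, matmul_pd_, matmul_prim_) are illustrative, not the PR's exact code:

```cpp
// Hedged sketch of the suggested restructuring: the primitive descriptor and
// the primitive are built once in the constructor; execute() only binds
// arguments and launches.
#include "oneapi/dnnl/dnnl.hpp"

class FCLayer {
public:
    FCLayer(const dnnl::engine &eng, const dnnl::memory::desc &src_md,
            const dnnl::memory::desc &weights_md,
            const dnnl::memory::desc &bias_md, const dnnl::memory::desc &dst_md)
            : engine_(eng),
              // Implementation selection and scratchpad setup happen here, once.
              matmul_pd_(eng, src_md, weights_md, bias_md, dst_md),
              // Kernel creation also happens here, once.
              matmul_prim_(matmul_pd_) {}

    void execute(dnnl::stream &strm, dnnl::memory &src, dnnl::memory &weights,
            dnnl::memory &bias, dnnl::memory &dst) {
        // Per-call cost is limited to argument binding and submission.
        matmul_prim_.execute(strm,
                {{DNNL_ARG_SRC, src}, {DNNL_ARG_WEIGHTS, weights},
                        {DNNL_ARG_BIAS, bias}, {DNNL_ARG_DST, dst}});
    }

private:
    dnnl::engine engine_;
    dnnl::matmul::primitive_desc matmul_pd_;
    dnnl::matmul matmul_prim_;
};
```

With this layout, repeated execute() calls avoid re-running implementation selection and kernel creation.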
