Commit a30dbbd

doc: updated name to oneDNN
1 parent 818e031 commit a30dbbd

104 files changed, +428 -412 lines changed


.github/ISSUE_TEMPLATE/bug_report.md

+2 -2

@@ -11,11 +11,11 @@ Provide a short summary of the issue. Sections below provide guidance on what
 factors are considered important to reproduce an issue.

 # Version
-Report DNNL version and githash. Version information is printed to stdout
+Report oneDNN version and githash. Version information is printed to stdout
 in [verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html).

 # Environment
-DNNL includes hardware-specific optimizations and may behave
+oneDNN includes hardware-specific optimizations and may behave
 differently on depending on the compiler and build environment. Include
 the following information to help reproduce the issue:
 * CPU make and model (try `lscpu`; if your `lscpu` does not list CPU flags,
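
For illustration only (not part of this commit), the version and githash requested above can typically be captured by running any application linked against the library with verbose mode enabled, as described in the linked verbose-mode guide. A minimal sketch, using a placeholder binary name:

```sh
# Sketch: DNNL_VERBOSE=1 enables verbose mode; the library then prints its
# version and githash to stdout along with primitive execution traces.
# "./my_app" is a hypothetical application linked against oneDNN.
DNNL_VERBOSE=1 ./my_app
```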

CMakeLists.txt

+3 -3

@@ -70,8 +70,8 @@ if("${CMAKE_BUILD_TYPE}" STREQUAL "")
 "Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel ...")
 endif()

-set(PROJECT_NAME "DNNL")
-set(PROJECT_FULL_NAME "Deep Neural Network Library (DNNL)")
+set(PROJECT_NAME "oneDNN")
+set(PROJECT_FULL_NAME "oneAPI Deep Neural Network Library (oneDNN)")
 set(PROJECT_VERSION "1.4.0")

 set(LIB_NAME dnnl)
@@ -84,7 +84,7 @@ else()
 endif()

 if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
-message(FATAL_ERROR "DNNL supports 64 bit platforms only")
+message(FATAL_ERROR "oneDNN supports 64 bit platforms only")
 endif()

 # Set the target architecture.

CONTRIBUTING.md

+11 -11

@@ -1,6 +1,6 @@
 # Contributing guidelines

-If you have improvements to the DNNL code, please send us your pull
+If you have improvements to the oneDNN code, please send us your pull
 requests! For getting started, see GitHub
 [howto](https://help.github.com/en/articles/about-pull-requests).

@@ -27,7 +27,7 @@ list.

 ## Library functionality guidelines

-DNNL focuses on functionality that satisfies all of the following
+oneDNN focuses on functionality that satisfies all of the following
 criteria:

 1. *Performance*: the functionality has material impact on a workload level.
@@ -53,28 +53,28 @@ primitives. In the RFC, please provide the following details:
 significant percentage of the total time and thus is a good optimization
 candidate.

-* The definition of the operation as an DNNL primitive including interface
+* The definition of the operation as a oneDNN primitive including interface
 and semantics. It is OK to have sketches for the interface, but the
 semantics should be fairly well defined.

 * If possible, provide information about similar compute operations. Sometimes
-DNNL primitives are super-sets of operations available in the
+oneDNN primitives are super-sets of operations available in the
 deep learning applications for the sake of greater portability across them.

 ## Code contribution guidelines

 The code must be:

-* *Tested*: DNNL uses gtests for lightweight functional testing and
+* *Tested*: oneDNN uses gtests for lightweight functional testing and
 benchdnn for functionality that requires both performance and functional
 testing.

-* *Documented*: DNNL uses Doxygen for inline comments in public header
+* *Documented*: oneDNN uses Doxygen for inline comments in public header
 files that is used to build reference manual and markdown (also processed by
 Doxygen) for user guide.

-* *Portable*: DNNL supports different operating systems, CPU and GPU
-architectures, compilers, and run-times. The new code should be complaint
+* *Portable*: oneDNN supports different operating systems, CPU and GPU
+architectures, compilers, and run-times. The new code should be compliant
 with the [System Requirements](README.md#system-requirements).

 ## Coding style
@@ -96,14 +96,14 @@ If in doubt, use the `clang-format`:
 ```sh
 clang-format -style=file -i foo.cpp
 ```
-This will format code using the `_clang_format` file found in the Intel
-DNNL top level directory.
+This will format code using the `_clang_format` file found in the oneDNN
+top level directory.

 Coding style is secondary to the general code design.

 ## Unit tests

-DNNL uses gtests for lightweight functional testing and benchdnn for
+oneDNN uses gtests for lightweight functional testing and benchdnn for
 performance and functional testing.

 Be sure to extend the existing tests when fixing an issue.

README.binary.in

+8 -8

@@ -1,15 +1,15 @@
-Deep Neural Network Library (DNNL)
-==================================
+oneAPI Deep Neural Network Library (oneDNN)
+===========================================

-Deep Neural Network Library (DNNL) is an
+oneAPI Deep Neural Network Library (oneDNN) is an
 open-source performance library for deep learning applications. The library
 includes basic building blocks for neural networks optimized
 for Intel Architecture Processors and Intel Processor Graphics.

-This package contains DNNL v@PROJECT_VERSION@ (@DNNL_VERSION_HASH@).
+This package contains oneDNN v@PROJECT_VERSION@ (@DNNL_VERSION_HASH@).

 You can find information about the latest version and release notes
-at DNNL Github (https://github.com/oneapi-src/oneDNN/releases).
+at oneDNN Github (https://github.com/oneapi-src/oneDNN/releases).

 Documentation
 -------------
@@ -23,7 +23,7 @@ provides comprehensive reference of the library API.
 System Requirements
 -------------------

-DNNL supports systems based on Intel 64 or AMD64 architecture.
+oneDNN supports systems based on Intel 64 or AMD64 architecture.

 The library is optimized for the following CPUs:
 * Intel Atom processor with Intel SSE4.1 support
@@ -34,7 +34,7 @@ The library is optimized for the following CPUs:
 * Intel Xeon Scalable processor (formerly Skylake and Cascade Lake)
 * future Intel Xeon Scalable processor (code name Cooper Lake)

-DNNL detects instruction set architecture (ISA) in the runtime and uses
+oneDNN detects instruction set architecture (ISA) in the runtime and uses
 just-in-time (JIT) code generation to deploy the code optimized
 for the latest supported ISA.

@@ -98,7 +98,7 @@ prior notification in future releases:
 License
 -------

-DNNL is licensed under Apache License Version 2.0. Refer to the "LICENSE"
+oneDNN is licensed under Apache License Version 2.0. Refer to the "LICENSE"
 file for the full license text and copyright notice.

 This distribution includes third party software governed by separate license

README.md

+14 -14

@@ -1,5 +1,5 @@
-Deep Neural Network Library (DNNL)
-==================================
+oneAPI Deep Neural Network Library (oneDNN)
+===========================================

 > **Note**
 >
@@ -11,15 +11,15 @@ Deep Neural Network Library (DNNL)
 > Version 1.0 brings incompatible changes to the 0.20 version. Please read
 > [Version 1.0 Transition Guide](https://oneapi-src.github.io/oneDNN/dev_guide_transition_to_v1.html).

-Deep Neural Network Library (DNNL) is an
+oneAPI Deep Neural Network Library (oneDNN) is an
 open-source performance library for deep learning applications. The library
 includes basic building blocks for neural networks optimized
 for Intel Architecture Processors and Intel Processor Graphics.

-DNNL is intended for deep learning applications and framework
+oneDNN is intended for deep learning applications and framework
 developers interested in improving application performance
 on Intel CPUs and GPUs. Deep learning practitioners should use one of the
-applications enabled with DNNL:
+applications enabled with oneDNN:
 * [Apache\* MXNet](https://mxnet.apache.org)
 * [BigDL](https://github.com/intel-analytics/BigDL)
 * [Caffe\* Optimized for Intel Architecture](https://github.com/intel/caffe)
@@ -77,7 +77,7 @@ If the configuration you need is not available, you can

 # System Requirements

-DNNL supports systems based on
+oneDNN supports systems based on
 [Intel 64 or AMD64 architecture](https://en.wikipedia.org/wiki/X86-64).

 The library is optimized for the following CPUs:
@@ -89,13 +89,13 @@ The library is optimized for the following CPUs:
 * Intel Xeon Scalable processor (formerly Skylake and Cascade Lake)
 * future Intel Xeon Scalable processor (code name Cooper Lake)

-DNNL detects instruction set architecture (ISA) in the runtime and uses
+oneDNN detects instruction set architecture (ISA) at runtime and uses
 just-in-time (JIT) code generation to deploy the code optimized
 for the latest supported ISA.

 > **WARNING**
 >
-> On macOS, applications that use DNNL may need to request special
+> On macOS, applications that use oneDNN may need to request special
 > entitlements if they use the hardened runtime. See the
 > [linking guide](https://oneapi-src.github.io/oneDNN/dev_guide_link.html)
 > for more details.
@@ -107,7 +107,7 @@ The library is optimized for the following GPUs:

 ## Requirements for Building from Source

-DNNL supports systems meeting the following requirements:
+oneDNN supports systems meeting the following requirements:
 * Operating system with Intel 64 architecture support
 * C++ compiler with C++11 standard support
 * [CMake](https://cmake.org/download/) 2.8.11 or later
@@ -120,7 +120,7 @@ dependencies.
 ### CPU Engine

 Intel Architecture Processors and compatible devices are supported by the
-DNNL CPU engine. The CPU engine is built by default and cannot
+oneDNN CPU engine. The CPU engine is built by default and cannot
 be disabled at build time. The engine can be configured to use the OpenMP or
 TBB threading runtime. The following additional requirements apply:
 * OpenMP runtime requires C++ compiler with OpenMP 2.0 or later standard support
@@ -133,7 +133,7 @@ the Intel C++ Compiler for the best performance results.

 ### GPU Engine

-Intel Processor Graphics is supported by the DNNL GPU engine. The GPU
+Intel Processor Graphics is supported by the oneDNN GPU engine. The GPU
 engine is disabled in the default build configuration. The following
 additional requirements apply when GPU engine is enabled:
 * OpenCL\* runtime library (OpenCL version 1.2 or later)
@@ -142,7 +142,7 @@ additional requirements apply when GPU engine is enabled:

 ### Runtime Dependencies

-When DNNL is built from source, the library runtime dependencies
+When oneDNN is built from source, the library runtime dependencies
 and specific versions are defined by the build environment.

 #### Linux
@@ -241,7 +241,7 @@ You may reach out to project maintainers privately at dnnl.maintainers@intel.com

 # Contributing

-We welcome community contributions to DNNL. If you have an idea on how
+We welcome community contributions to oneDNN. If you have an idea on how
 to improve the library:

 * For changes impacting the public API, submit
@@ -261,7 +261,7 @@ contributors are expected to adhere to the

 # License

-DNNL is licensed under [Apache License Version 2.0](LICENSE). Refer to the
+oneDNN is licensed under [Apache License Version 2.0](LICENSE). Refer to the
 "[LICENSE](LICENSE)" file for the full license text and copyright notice.

 This distribution includes third party software governed by separate license

THIRD-PARTY-PROGRAMS

+1 -1

@@ -1,4 +1,4 @@
-Deep Neural Network Library Third Party Programs File
+oneAPI Deep Neural Network Library (oneDNN) Third Party Programs File

 This file contains the list of third party software ("third party programs")
 contained in the Intel software and their required notices and/or license

cmake/TBB.cmake

+1

@@ -40,3 +40,4 @@ if(TBB_FOUND)
 unset(_tbb_include_dirs)
 unset(_tbb_root)
 endif()
+
cmake/Threading.cmake

+1 -1

@@ -51,7 +51,7 @@ macro(find_package_tbb)
 TBB_INTERFACE_VERSION "${_tbb_stddef}")
 if (${TBB_INTERFACE_VERSION} VERSION_LESS 9100)
 if("${mode}" STREQUAL REQUIRED)
-message(FATAL_ERROR "DNNL requires TBB version 2017 or above")
+message(FATAL_ERROR "oneDNN requires TBB version 2017 or above")
 endif()
 unset(TBB_FOUND)
 endif()

cmake/gen_mkldnn_compat_cmakes.cmake

+1 -1

@@ -14,7 +14,7 @@
 # limitations under the License.
 #===============================================================================

-# Creates cmake config for MKLDNN based on DNNL one
+# Creates cmake config for MKLDNN based on oneDNN one
 # (by replacing DNNL with MKLDNN)
 # Parameters:
 # DIR -- path to cmake install dir

cmake/lnx/TBBConfig.cmake

+2 -2

@@ -34,7 +34,7 @@ if (NOT _tbbmalloc_proxy_ix EQUAL -1)
 endif()
 endif()

-# DNNL changes: use TBBROOT to locate Intel TBB
+# oneDNN changes: use TBBROOT to locate Intel TBB
 # get_filename_component(_tbb_root "${CMAKE_CURRENT_LIST_FILE}" PATH)
 # get_filename_component(_tbb_root "${_tbb_root}" PATH)
 if (NOT TBBROOT)
@@ -126,7 +126,7 @@ foreach (_tbb_component ${TBB_FIND_COMPONENTS})
 set(_tbb_release_lib "${_tbb_lib_path}/lib${_tbb_component}.so.2")
 set(_tbb_debug_lib "${_tbb_lib_path}/lib${_tbb_component}_debug.so.2")

-# DNNL change: check library existence (BUILD_MODE related only, not both)
+# oneDNN change: check library existence (BUILD_MODE related only, not both)
 string(TOUPPER "${CMAKE_BUILD_TYPE}" UPPERCASE_CMAKE_BUILD_TYPE)
 if (UPPERCASE_CMAKE_BUILD_TYPE STREQUAL "DEBUG")
 if (EXISTS "${_tbb_debug_lib}")

cmake/mac/TBBConfig.cmake

+1 -1

@@ -34,7 +34,7 @@ if (NOT _tbbmalloc_proxy_ix EQUAL -1)
 endif()
 endif()

-# DNNL changes: use TBBROOT to locate Intel TBB
+# oneDNN changes: use TBBROOT to locate Intel TBB
 # get_filename_component(_tbb_root "${CMAKE_CURRENT_LIST_FILE}" PATH)
 # get_filename_component(_tbb_root "${_tbb_root}" PATH)
 if (NOT TBBROOT)

cmake/mkldnn_compat.cmake

+1 -1

@@ -17,7 +17,7 @@
 # Provides compatibility with Intel MKL-DNN build options
 #===============================================================================

-# Sets if dnnl var is unset, copy the value from mkldnn var
+# Sets if DNNL_* var is unset, copy the value from corresponding MKLDNN_* var
 macro(mkldnn_compat_var dnnl_var mkldnn_var props)
 if (DEFINED ${mkldnn_var} AND NOT DEFINED ${dnnl_var})
 if ("${props}" STREQUAL "CACHE STRING")

cmake/options.cmake

+6 -6

@@ -27,7 +27,7 @@ set(options_cmake_included true)
 # ========

 option(DNNL_VERBOSE
-"allows DNNL be verbose whenever DNNL_VERBOSE
+"allows oneDNN be verbose whenever DNNL_VERBOSE
 environment variable set to 1" ON) # enabled by default

 option(DNNL_ENABLE_CONCURRENT_EXEC
@@ -43,24 +43,24 @@ option(DNNL_ENABLE_PRIMITIVE_CACHE "enables primitive cache.
 # disabled by default

 option(DNNL_ENABLE_MAX_CPU_ISA
-"enables control of CPU ISA detected by DNNL via DNNL_MAX_CPU_ISA
+"enables control of CPU ISA detected by oneDNN via DNNL_MAX_CPU_ISA
 environment variable and dnnl_set_max_cpu_isa() function" ON)

 # =============================
 # Building properties and scope
 # =============================

 set(DNNL_LIBRARY_TYPE "SHARED" CACHE STRING
-"specifies whether DNNL library should be SHARED or STATIC")
+"specifies whether oneDNN library should be SHARED or STATIC")
 option(DNNL_BUILD_EXAMPLES "builds examples" ON)
 option(DNNL_BUILD_TESTS "builds tests" ON)
-option(DNNL_BUILD_FOR_CI "specifies whether DNNL library should be built for CI" OFF)
+option(DNNL_BUILD_FOR_CI "specifies whether oneDNN library should be built for CI" OFF)
 option(DNNL_WERROR "treat warnings as errors" OFF)

 set(DNNL_INSTALL_MODE "DEFAULT" CACHE STRING
 "specifies installation mode; supports DEFAULT or BUNDLE.

-When BUNDLE option is set DNNL will be installed as a bundle
+When BUNDLE option is set oneDNN will be installed as a bundle
 which contains examples and benchdnn.")

 set(DNNL_CODE_COVERAGE "OFF" CACHE STRING
@@ -98,7 +98,7 @@ set(DNNL_ARCH_OPT_FLAGS "HostOpts" CACHE STRING
 # ======================

 option(DNNL_ENABLE_JIT_PROFILING
-"Enable registration of DNNL kernels that are generated at
+"Enable registration of oneDNN kernels that are generated at
 runtime with Intel VTune Amplifier (on by default). Without the
 registrations, Intel VTune Amplifier would report data collected inside
 the kernels as `outside any known module`."
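
As a hedged illustration (not part of this diff), the DNNL_* options above are ordinary CMake cache variables, so a configure line for a static build without examples and tests might look like the sketch below; the build directory layout is an assumption:

```sh
# Sketch: out-of-source configure using options listed in cmake/options.cmake.
mkdir -p build && cd build
cmake .. \
    -DDNNL_LIBRARY_TYPE=STATIC \
    -DDNNL_BUILD_EXAMPLES=OFF \
    -DDNNL_BUILD_TESTS=OFF
```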

cmake/win/TBBConfig.cmake

+1 -1

@@ -34,7 +34,7 @@ if (NOT _tbbmalloc_proxy_ix EQUAL -1)
 endif()
 endif()

-# DNNL changes: use TBBROOT to locate Intel TBB
+# oneDNN changes: use TBBROOT to locate Intel TBB
 # get_filename_component(_tbb_root "${CMAKE_CURRENT_LIST_FILE}" PATH)
 # get_filename_component(_tbb_root "${_tbb_root}" PATH)
 if (NOT TBBROOT)
