Crates.io | caffe2-miopen |
lib.rs | caffe2-miopen |
version | 0.1.5-alpha.0 |
source | src |
created_at | 2023-03-01 21:41:34.610228 |
updated_at | 2023-03-25 11:19:23.313657 |
description | xxx |
homepage | |
repository | https://github.com/kleb6/caffe2-rs |
max_upload_size | |
id | 798346 |
size | 103,044 |
caffe2-miopen
Crate in the process of being translated from C++ to Rust. Some function bodies may still be undergoing translation.
BNParamType
ScalingParamType
These types name the parameter types used by MIOpen kernels: `BNParamType` for the parameters of a batch-normalization operation, and `ScalingParamType` for the alpha/beta scaling scalars passed to MIOpen calls. Batch normalization is a technique used in deep learning to improve the convergence and stability of neural networks. It works by normalizing the inputs to a layer to zero mean and unit variance, which reduces the internal covariate shift that occurs during training.
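As a pure-Rust sketch of the math these parameter types configure (not the MIOpen kernel itself, which runs on the GPU):

```rust
// Normalize a slice to zero mean and unit variance, then apply the
// learned scale (gamma) and shift (beta), as batch normalization does.
fn batch_norm(x: &[f32], gamma: f32, beta: f32, eps: f32) -> Vec<f32> {
    let n = x.len() as f32;
    let mean = x.iter().sum::<f32>() / n;
    let var = x.iter().map(|v| (v - mean).powi(2)).sum::<f32>() / n;
    // eps guards against division by zero when the variance vanishes.
    x.iter()
        .map(|v| gamma * (v - mean) / (var + eps).sqrt() + beta)
        .collect()
}
```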
check_miopen_versions
miopen_compiled_version
miopen_get_error_string
miopen_runtime_version
These functions and constants are used to check the version of the MIOpen library, AMD's GPU-accelerated deep learning primitives library (part of the ROCm stack, analogous to NVIDIA's cuDNN). Checking the library version is useful for debugging and for ensuring compatibility between the headers the crate was compiled against and the library loaded at runtime.
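A hypothetical sketch of the compiled-vs-runtime check such functions enable; the real `miopen_compiled_version` and `miopen_runtime_version` query the library via FFI, and the exact compatibility policy here is assumed:

```rust
// Compare (major, minor, patch) triples; reject a major-version
// mismatch, which usually signals an ABI-incompatible library.
fn check_versions(
    compiled: (u32, u32, u32),
    runtime: (u32, u32, u32),
) -> Result<(), String> {
    if compiled.0 != runtime.0 {
        return Err(format!(
            "MIOpen major version mismatch: compiled {:?}, runtime {:?}",
            compiled, runtime
        ));
    }
    Ok(())
}
```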
default
descriptor
descriptor_from_order_and_dims
These functions create and manipulate MIOpen tensor descriptors, which describe the size and layout of the tensors used in a neural network. `default` creates a default tensor descriptor, `descriptor` creates a descriptor with specific dimensions and data type, and `descriptor_from_order_and_dims` creates a descriptor with specific dimensions and layout order.
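A simplified sketch of what a descriptor carries; the real `miopenTensorDescriptor_t` is an opaque FFI handle, and the NCHW stride rule below is a standard convention, not taken from this crate's source:

```rust
// A tensor descriptor records dimensions plus strides that encode
// the memory layout.
#[derive(Debug, PartialEq)]
struct TensorDesc {
    dims: Vec<i32>,
    strides: Vec<i32>,
}

impl TensorDesc {
    // Build a descriptor for NCHW order: the innermost (W) dimension
    // is contiguous, and each outer stride is the product of the
    // inner dimensions.
    fn from_nchw(n: i32, c: i32, h: i32, w: i32) -> Self {
        TensorDesc {
            dims: vec![n, c, h, w],
            strides: vec![c * h * w, h * w, w, 1],
        }
    }
}
```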
do_run_with_type
run_on_device
These functions execute a MIOpen operation on a specific device. `do_run_with_type` executes an operation with a specific data type, while `run_on_device` executes an operation on a specific device.
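The usual shape of this pattern is runtime dispatch on the tensor's element type into a monomorphized implementation. A sketch under that assumption (the enum and return values are illustrative, not the crate's API):

```rust
// Runtime tag for the element type stored in a tensor.
enum DType {
    F32,
    F16,
}

// Dispatch to a type-specialized implementation, as do_run_with_type
// does for each supported data type.
fn run_on_device(dtype: DType) -> &'static str {
    match dtype {
        DType::F32 => do_run_with_type::<f32>(),
        DType::F16 => do_run_with_type::<u16>(), // half stored as raw u16 bits
    }
}

// Monomorphized per element type; here it just reports the type name.
fn do_run_with_type<T>() -> &'static str {
    std::any::type_name::<T>()
}
```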
drop
in
k_one
k_zero
These constants and types are used in MIOpen operations. `in` represents an input tensor, while `k_one` and `k_zero` are constants representing the scalar values 1 and 0, used as the alpha/beta scaling factors in MIOpen calls. `drop` is the method of Rust's `Drop` trait (not a keyword), used to release the resources associated with an object when it goes out of scope.
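A sketch of the pattern, assuming the handle wraps an opaque FFI pointer (the `MiopenHandle` name and the `println!` stand-in for the real destructor call are illustrative):

```rust
// Scalar constants passed as alpha/beta scaling factors.
const K_ONE: f32 = 1.0;
const K_ZERO: f32 = 0.0;

// Stand-in for an opaque MIOpen handle obtained over FFI.
struct MiopenHandle(usize);

impl Drop for MiopenHandle {
    fn drop(&mut self) {
        // A real implementation would call the MIOpen destroy
        // function here to free the native resource.
        println!("releasing handle {}", self.0);
    }
}
```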
miopenTensorDescWrapper
miopenTypeWrapperF32
miopenTypeWrapperHalf
These types wrap MIOpen tensor descriptors and data types, providing a Rust interface to the MIOpen library and ensuring type safety in Rust code that interacts with it.
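The wrapper pattern typically associates each Rust element type with its MIOpen data-type tag at compile time. A sketch under that assumption (the trait name and the numeric tag values are illustrative, not the crate's actual definitions):

```rust
// Compile-time mapping from a Rust element type to a MIOpen
// data-type tag (stand-in for the miopenDataType_t enum).
trait MiopenTypeWrapper {
    const DATA_TYPE: u32;
}

struct F32Tag;
struct HalfTag;

// Tag values assumed for illustration only.
impl MiopenTypeWrapper for F32Tag {
    const DATA_TYPE: u32 = 1;
}
impl MiopenTypeWrapper for HalfTag {
    const DATA_TYPE: u32 = 0;
}
```

Generic code can then request `T::DATA_TYPE` instead of threading a runtime enum through every call, which is where the type safety comes from.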
MIOPENActivationGradientOp
MIOPENActivationOp
MIOPENActivationOpBase
These types represent activation functions used in neural networks. Activation functions introduce nonlinearity into neural networks, which allows them to model complex relationships between inputs and outputs. `MIOPENActivationOpBase` provides a base type for activation functions, while `MIOPENActivationOp` and `MIOPENActivationGradientOp` provide implementations for specific activation functions and their gradients, respectively.
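As a CPU sketch of the forward/gradient pair such operators implement on the GPU, here is ReLU and its derivative:

```rust
// ReLU forward pass: max(x, 0).
fn relu(x: f32) -> f32 {
    x.max(0.0)
}

// ReLU backward pass: the upstream gradient dy passes through
// wherever the input was positive, and is zero elsewhere.
fn relu_grad(x: f32, dy: f32) -> f32 {
    if x > 0.0 { dy } else { 0.0 }
}
```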
MIOPENWrapper
MIOpenState
MIOpenWorkspace
PerGPUMIOPENStates
SyncedMIOPENState
These types represent the state of a MIOpen operation and the workspace MIOpen uses to store intermediate results. `MIOPENWrapper` provides a Rust interface to the MIOpen library, while `MIOpenState` and `SyncedMIOPENState` provide state management for MIOpen operations. `MIOpenWorkspace` represents a workspace used by MIOpen to store intermediate results, and `PerGPUMIOPENStates` manages MIOpen state for multiple GPUs.
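A minimal sketch of per-GPU bookkeeping, assuming states are created lazily and keyed by GPU and stream index (the structure and the `String` stand-in for a real state object are illustrative):

```rust
use std::collections::HashMap;

// One lazily created state per (gpu, stream) pair, in the spirit
// of PerGPUMIOPENStates.
#[derive(Default)]
struct PerGpuStates {
    states: HashMap<(usize, usize), String>,
}

impl PerGpuStates {
    // Return the state for (gpu, stream), creating it on first use.
    fn with_state(&mut self, gpu: usize, stream: usize) -> &mut String {
        self.states
            .entry((gpu, stream))
            .or_insert_with(|| format!("state for gpu {gpu}, stream {stream}"))
    }
}
```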
execute
get
inline_miopen_handle
is
miopen_handle
miopen_states
reset
with_miopen_state
These functions manage MIOpen handles and state. `execute` executes a MIOpen operation, while `get` retrieves a value from a MIOpen tensor. `inline_miopen_handle` creates an inline MIOpen handle, which can be used to execute a MIOpen operation inline with other Rust code. `is` is a type trait used to determine whether a type is a MIOpen operation, while `miopen_handle` retrieves a MIOpen handle. `miopen_states` retrieves the current MIOpen states, while `reset` resets a MIOpen operation to its initial state. `with_miopen_state` executes a closure with a MIOpen state.
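The closure-based access pattern usually exists so that the state's lock is held only for the duration of the closure. A sketch under that assumption (the `SyncedState` fields and function signature are illustrative):

```rust
use std::sync::Mutex;

// Stand-in for a synchronized MIOpen state guarded by a lock.
struct SyncedState {
    workspace_bytes: usize,
}

// Run a closure while holding the lock, releasing it on return,
// in the spirit of with_miopen_state.
fn with_state<R>(
    state: &Mutex<SyncedState>,
    f: impl FnOnce(&mut SyncedState) -> R,
) -> R {
    let mut guard = state.lock().unwrap();
    f(&mut guard)
}
```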
RecurrentBaseOp
RecurrentGradientOp
RecurrentOp
RecurrentParamAccessOp
RecurrentParamOpMode
These types represent recurrent neural network (RNN) operations. RNNs are a class of neural networks designed to operate on sequential data, such as time-series data or natural-language text. `RecurrentBaseOp` provides a base type for RNN operations, while `RecurrentOp` and `RecurrentGradientOp` provide implementations for specific RNN operations and their gradients, respectively. `RecurrentParamAccessOp` provides access to the parameters of an RNN, while `RecurrentParamOpMode` represents the mode of an RNN operation.
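As a CPU sketch of the recurrence such operators evaluate, here is one step of a vanilla RNN with shapes reduced to scalars (real implementations use weight matrices and hidden-state vectors):

```rust
// One step of a vanilla RNN: the new hidden state is a nonlinear
// function of the current input x and the previous hidden state.
fn rnn_step(x: f32, h_prev: f32, w_x: f32, w_h: f32, b: f32) -> f32 {
    (w_x * x + w_h * h_prev + b).tanh()
}
```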
TensorDescriptors
descs
These items represent collections of MIOpen tensor descriptors. `TensorDescriptors` is a type used to represent a collection of descriptors, while `descs` is a Rust macro used to create such a collection.
In summary, `caffe2-miopen` is a Rust crate that provides bindings to the MIOpen library, which is used to accelerate deep learning computations on AMD GPUs. The crate provides Rust interfaces to many MIOpen operations and types, as well as functions for managing MIOpen state and workspaces. The crate is still in the process of being translated from C++ to Rust, but many of the core functions are already available.
7> what is miopen?
MIOpen (pronounced "my-open") is a deep learning library developed by AMD that provides optimized implementations of deep learning algorithms and operators for GPUs. MIOpen is designed to work with a variety of deep learning frameworks, including Caffe2, TensorFlow, and PyTorch.
MIOpen provides a range of features such as optimized implementations of convolutional neural networks, support for mixed-precision training, and automatic performance tuning based on hardware characteristics. It is intended to deliver high-performance deep learning computation on AMD GPUs, and is particularly well-suited for large-scale training of deep neural networks.
8> what are some of its core primitives?
Some of the core primitives in MIOpen include:
Convolutional layers: MIOpen provides highly optimized implementations of convolutional neural network (CNN) layers, which are used extensively in deep learning models for image and video recognition.
Activation functions: MIOpen provides optimized implementations of a variety of activation functions, including Rectified Linear Unit (ReLU), Sigmoid, and Hyperbolic Tangent (Tanh), which are commonly used in deep learning models.
Pooling layers: MIOpen provides highly optimized implementations of pooling layers, which are used to reduce the spatial dimensions of feature maps in CNNs.
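The spatial reduction that pooling performs can be sketched on the CPU as follows (a 2x2 max pool over a flattened H x W map; MIOpen's version is a tuned GPU kernel with configurable window, stride, and padding):

```rust
// 2x2, stride-2 max pooling: each output cell is the maximum of a
// 2x2 window of the input, halving each spatial dimension.
fn max_pool_2x2(input: &[f32], h: usize, w: usize) -> Vec<f32> {
    let (oh, ow) = (h / 2, w / 2);
    let mut out = vec![f32::NEG_INFINITY; oh * ow];
    for i in 0..oh {
        for j in 0..ow {
            for di in 0..2 {
                for dj in 0..2 {
                    let v = input[(2 * i + di) * w + (2 * j + dj)];
                    if v > out[i * ow + j] {
                        out[i * ow + j] = v;
                    }
                }
            }
        }
    }
    out
}
```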
Batch normalization: MIOpen provides optimized implementations of batch normalization, which is a technique used to improve the stability and convergence of deep neural networks.
Recurrent layers: MIOpen provides highly optimized implementations of recurrent neural network (RNN) layers, which are used extensively in natural language processing (NLP) and speech recognition applications.
Dropout: MIOpen provides optimized implementations of dropout, which is a regularization technique used to prevent overfitting in deep neural networks.
Overall, MIOpen is designed to provide a set of optimized building blocks for deep learning models, which can be combined in a variety of ways to build complex neural networks.