CUDA

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
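The model in a nutshell: the host allocates and copies data to the device, launches a grid of thread blocks that all execute the same kernel function in parallel, then copies the results back. The snippet below is a minimal, illustrative CUDA C++ vector addition, not taken from any project listed here; names such as vector_add and the 256-thread block size are arbitrary choices, and error handling is omitted for brevity.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // one million elements
    const size_t bytes = n * sizeof(float);

    // Host data.
    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with nvcc (e.g. `nvcc add.cu -o add`), this launches 4,096 blocks of 256 threads, each handling a disjoint slice of the million-element arrays.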
Here are 111 public repositories matching this topic...
- Burn is a next generation Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. (Rust, updated Jul 24, 2025)
- Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust. (Rust, updated Jul 12, 2025)
- Deep learning in Rust, with shape-checked tensors and neural networks. (Rust, updated Jul 23, 2024)
- The Hacker's Machine Learning Engine. (Rust, updated Jul 22, 2024)
- Safe Rust wrapper around the CUDA toolkit. (Rust, updated Jun 27, 2025)
- A machine learning library for Rust. (Rust, updated Aug 19, 2024)
- NviWatch: a blazingly fast Rust-based TUI for managing and monitoring NVIDIA GPU processes. (Rust, updated Jun 26, 2025)
- seaweedfs implemented in pure Rust. (Rust, updated Apr 1, 2025)
- Safe Rust wrapper for the NVIDIA Management Library. (Rust, updated Jul 2, 2025)
- A deep learning and preprocessing framework in Rust with support for CPU and GPU. (Rust, updated Oct 3, 2023)
- Calibration software for the Murchison Widefield Array (MWA) radio telescope. (Rust, updated Jul 15, 2025)
- LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed! (Rust, updated Jul 27, 2023)
Created by NVIDIA
Released June 23, 2007
- Followers: 262
- Website: github.com/topics/cuda
- Wikipedia