
Update README.md to emphasize CUDA requirement and remove CPU fallback references

master · mht, 2 weeks ago · commit d704cda9c0

README.md (1 changed file, 23 lines changed)
@@ -14,7 +14,7 @@ The project consists of two main components:
- CMake (3.18 or higher)
- C++17 compatible compiler
- LibTorch (PyTorch C++ API)
-- CUDA (optional, for GPU acceleration)
+- **CUDA (required)** - This implementation requires CUDA and does not support CPU-only execution
## Building the Project
@@ -28,8 +28,8 @@ chmod +x build.sh
```
This will:
-1. Check for CUDA availability
-2. Download LibTorch if not already installed
+1. Check for CUDA availability (and exit if not found)
+2. Download LibTorch with CUDA support if not already installed
3. Configure the project with CMake
4. Build the project
5. Install the executable to the `bin/` directory
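The CUDA check referenced in the updated step 1 is not shown in this commit. As a rough sketch of what such a check in `build.sh` could look like (assuming `nvcc` is on the PATH; the actual script may differ):

```bash
# Hypothetical check, not taken from the repository's build.sh:
# abort the build early when the CUDA toolkit is not installed.
if ! command -v nvcc >/dev/null 2>&1; then
    echo "Error: CUDA toolkit (nvcc) not found. This project requires CUDA." >&2
    exit 1
fi
echo "Using CUDA: $(nvcc --version | grep -i release)"
```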
@@ -50,16 +50,12 @@ cmake --build . --config Release
To run the demo application:
```bash
# Set the library path to include LibTorch
LD_LIBRARY_PATH=$HOME/libtorch/lib:$LD_LIBRARY_PATH ./bin/tracking_demo
```
Or use the provided script:
```bash
# Make sure CUDA is properly set up in your environment
./run_demo.sh
```
The script will check for CUDA availability and set up the necessary environment variables before running the demo.
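The script itself is not part of this diff. A minimal sketch of the behavior described above (check for CUDA, set up the environment, launch the demo), assuming LibTorch is installed under `$HOME/libtorch` as in the manual command:

```bash
# Hypothetical sketch, not the repository's actual run_demo.sh.
if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "Error: no NVIDIA driver detected (nvidia-smi not found); CUDA is required." >&2
    exit 1
fi
export LD_LIBRARY_PATH="$HOME/libtorch/lib:$LD_LIBRARY_PATH"
exec ./bin/tracking_demo "$@"
```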
## Project Structure
- `cimp/`: Main C++ implementation
@@ -73,10 +69,11 @@ Or use the provided script:
- `ltr/`: Reference Python implementation
- `bin/`: Built executables
-## Known Issues
+## Implementation Notes
-- The PrRoIPooling implementation requires CUDA, but there's a fallback CPU implementation
-- Some CUDA operations may fail on certain GPU configurations; the code includes fallbacks
+- The PrRoIPooling implementation requires CUDA and has no CPU fallback
+- All tensor operations are performed on CUDA devices
+- The tracker is optimized for GPU execution only
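One quick way to confirm that a build really targets the CUDA version of LibTorch (an illustrative check, not something the README prescribes) is to inspect the shared libraries the demo links against; CUDA-enabled LibTorch builds typically pull in `libtorch_cuda`:

```bash
# Assumes a Linux build with LibTorch under $HOME/libtorch.
LD_LIBRARY_PATH=$HOME/libtorch/lib:$LD_LIBRARY_PATH ldd bin/tracking_demo | grep -i torch_cuda
```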
## Comparing Python and C++ Implementations
