Can't start training, BufferTooBig #304

@andresilva9

Description

I have used the same script to train other datasets, and it worked even with a higher Max Splats and Max Resolution. This time I am using a much larger dataset (64 GB), and I get the error below; training doesn't even start. Should I compress the dataset to make it smaller?
brush:0.3.0 \
    /workdir/data \
    --total-steps 50000 \
    --max-resolution 1920 \
    --max-splats 8000000 \
    --export-every 5000 \
    --export-path /workdir/output \
    --export-name "export_{iter}.ply"

non-network local connections being added to access control list
Completed loading
✅ evaluating every 1000 steps
ℹ️ Completed loading
thread 'main' panicked at /usr/local/cargo/git/checkouts/cubecl-058c47895211d464/13ebbd5/crates/cubecl-runtime/src/client.rs:314:41:
called `Result::unwrap()` on an `Err` value: BufferTooBig(2290420416)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
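For context, the panic reports a single GPU buffer allocation of 2,290,420,416 bytes (about 2.13 GiB), which is just over a common per-buffer cap of 2 GiB on many backends. The sketch below is purely illustrative arithmetic, not Brush's or cubecl's actual code, and the 2 GiB cap and the per-splat scaling are assumptions:

```rust
// Illustrative sketch only: shows why the size in BufferTooBig(2290420416)
// can exceed a typical single-buffer limit. Not Brush/cubecl code.
fn main() {
    // Size reported in the panic message.
    let requested_bytes: u64 = 2_290_420_416;

    // Assumed per-buffer cap of 2 GiB; the real limit depends on the
    // adapter and the runtime's configured limits.
    let assumed_max_buffer_bytes: u64 = 2 * 1024 * 1024 * 1024;

    println!(
        "requested {:.2} GiB vs assumed cap {:.2} GiB -> too big: {}",
        requested_bytes as f64 / (1u64 << 30) as f64,
        assumed_max_buffer_bytes as f64 / (1u64 << 30) as f64,
        requested_bytes > assumed_max_buffer_bytes
    );

    // If (and only if) this buffer scales with splat count, then at
    // --max-splats 8000000 it works out to roughly 286 bytes per splat,
    // so lowering --max-splats would shrink the allocation proportionally.
    let approx_bytes_per_splat = requested_bytes / 8_000_000;
    println!("~{} bytes per splat at --max-splats 8000000", approx_bytes_per_splat);
}
```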
