Unable to Access High-End GPUs (A100/H100) Despite Research Credits #15902
Barbhuiya12 started this conversation in General
Hi team,
I received a research grant of approximately 1,000 GCP credits, but I am currently unable to access high-end GPUs such as the A100 or H100. My requests for these GPUs have been declined repeatedly for over a month, without a clear explanation or resolution.
Use case
Training a large-scale deep learning model (~100 million parameters)
Dataset size: multiple terabytes
The training workload is research-critical and time-sensitive
Current limitation
Only L4 GPU access (24 GB VRAM) has been granted
This is not sufficient for the workload:
One training epoch takes approximately 12 hours
Full training would take several weeks or longer
Memory constraints significantly limit batch size and training efficiency
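To put the memory limitation in numbers, here is some back-of-the-envelope arithmetic on the fixed memory cost of training a 100M-parameter model (assuming fp32 tensors and an Adam-style optimizer with two moment buffers; activations, framework overhead, and dataloader buffers are deliberately excluded):

```python
def fixed_training_memory_gb(n_params, bytes_per_value=4, n_optimizer_states=2):
    """Rough fixed memory for training: weights + gradients + optimizer states.

    Assumes fp32 values (4 bytes) and Adam (2 moment tensors per parameter).
    Activation memory, which scales with batch size, is NOT included.
    """
    tensors_per_param = 1 + 1 + n_optimizer_states  # weights, grads, m, v
    return n_params * bytes_per_value * tensors_per_param / 1e9

model_gb = fixed_training_memory_gb(100_000_000)  # ~1.6 GB of fixed tensors
l4_vram_gb = 24
activation_budget_gb = l4_vram_gb - model_gb      # what remains for activations
```

Under these assumptions the fixed tensors are small, so the batch-size ceiling on the L4 comes almost entirely from activation memory, which grows with batch size and input size.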
Questions and request for guidance
What are the approval criteria for accessing A100 or H100 GPUs under research credits?
Is there a formal justification or review process that I should follow?
Are there any alternative programs, quota increases, or temporary approvals available for research workloads of this scale?
If access to A100/H100 GPUs is not possible, are there recommended configurations or optimizations to make training feasible?
At present, this research cannot realistically progress using only L4 GPUs. Any clarification, guidance, or escalation path would be greatly appreciated.
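For context on the last question above, one optimization I have been testing on the L4 is gradient accumulation: averaging gradients over k micro-batches before each optimizer step, which reproduces the gradient of a k-times-larger batch at the cost of extra wall-clock time. A minimal, framework-free sketch using a toy 1-D least-squares gradient (all names here are illustrative, not from any particular library):

```python
def grad(w, xs, ys):
    # d/dw of the mean squared error 0.5*(w*x - y)^2 over the batch
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.0

# Gradient of the full batch of 4 examples:
full = grad(w, xs, ys)

# Same data as k=2 micro-batches of 2, accumulated with a 1/k scale:
k = 2
acc = 0.0
for i in range(k):
    mb_x, mb_y = xs[2 * i:2 * i + 2], ys[2 * i:2 * i + 2]
    acc += grad(w, mb_x, mb_y) / k

# acc equals full: the update matches a single large-batch step
```

This recovers the large effective batch size, but it does not address the 12-hour epoch time, which is why access to an A100/H100 remains the real blocker.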