
Conversation

@rushikeshjadhav
Contributor

Added test_linstor_sr_fail_host to simulate a crash of a non-master host.

  • Chooses a host within a LINSTOR SR pool and simulates a crash using sysrq-trigger.
  • Verifies VM boot and shutdown on all remaining hosts during the outage, and confirms recovery of the failed host for VM placement post-reboot.
  • Ensures SR scan consistency post-recovery (see the sketch after this list).
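
For reference, a minimal sketch of the intended flow. It assumes hypothetical helpers (wait_for_host_down, wait_for_host_up) and an assumed shape for vm.start(on=...), sr.scan(), sr.pool.hosts and host.ssh(); anything not quoted in the diff excerpts further down this thread is an assumption, not the actual test code:

    # Outline only -- names marked as assumed/hypothetical are not the real API.
    import random

    def test_linstor_sr_fail_host(linstor_sr, vm_on_linstor_sr):
        sr = linstor_sr
        vm = vm_on_linstor_sr
        hosts = list(sr.pool.hosts)          # assumed attribute
        hosts.remove(sr.pool.master)
        random_host = random.choice(hosts)   # non-master host to crash

        # Crash the host abruptly: sysrq 'b' reboots without syncing disks.
        # The SSH connection will drop; error handling is omitted here.
        random_host.ssh(['echo b > /proc/sysrq-trigger'])
        wait_for_host_down(random_host)      # hypothetical helper

        # While it is down, every remaining host must still run the VM.
        for host in sr.pool.hosts:
            if host.uuid == random_host.uuid:
                continue
            vm.start(on=host.uuid)           # assumed signature
            vm.wait_for_os_booted()
            vm.shutdown(verify=True)

        # The crashed host should come back and accept the VM again.
        wait_for_host_up(random_host)        # hypothetical helper
        vm.start(on=random_host.uuid)
        vm.wait_for_os_booted()
        vm.shutdown(verify=True)

        sr.scan()  # assumed call: the SR scan must stay consistent after recovery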

Member

If the test fails while the SR is in a failed state, could this leave the pool in a bad state?

Contributor Author

If the test fails and the SR goes bad, I don't see an easy way to recover from it. Some scenario-based troubleshooting could be required before cleaning up. However, in principle a single host failure in XOSTOR should be tolerable, and this test should catch the cases where it is not.

We probably should use nested tests for this so that in case the pool goes bad, we can just wipe and start with a new clean one.

Contributor

Or should the SR used for physical tests be a throw-away one too?

Contributor Author

There are block devices involved (LVM, DRBD, tapdisk, which gets blocked on IO), and an improper teardown needs careful inspection and recovery, or a harsh wipe of everything plus reboots. A host failure is still tolerated better than a disk/LVM failure.

XOSTOR SR is hardly a throw-away.

Contributor

Well, if we use those block devices for nothing other than this test, we can easily blank them to restart from scratch. We do that for all local SRs, and in some way a Linstor SR is "local to the pool". If our test pool is the sole user, it looks throw-away to me.

Contributor Author

Yes, this happens (manually) when the test needs a clean start. If that's acceptable, we can add it to prepare_test or similar so that a manual script is not required.
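
A minimal sketch of that idea, assuming the LINSTOR disks are dedicated to the test pool; the device path, helper name and ssh call shape are assumptions, not existing code:

    # Hypothetical pre-test cleanup: blank the dedicated LINSTOR devices so a
    # previous failed run cannot leave stale LVM/DRBD metadata behind.
    LINSTOR_DISKS = ['/dev/sdb']  # assumption: devices used by the test pool only

    def wipe_linstor_disks(hosts):
        for host in hosts:
            for disk in LINSTOR_DISKS:
                # wipefs -a clears filesystem/LVM/DRBD signatures and is
                # harmless if the device is already blank.
                host.ssh(['wipefs', '-a', disk])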

@rushikeshjadhav
Contributor Author

@ydirson do we need a new fixture to handle VM reboots (during the host's failed and recovered states)? https://github.com/xcp-ng/xcp-ng-tests/pull/313/files#diff-e40824d600ab1c5614cf60bf13e30d8bea1634a03c0df205b9cb1a15239a8505R162-R164
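
For discussion, one possible shape for such a fixture; everything here (the fixture name, is_running(), the shutdown-on-teardown policy) is a hypothetical illustration, not existing code:

    import pytest

    @pytest.fixture
    def running_vm_on_linstor_sr(vm_on_linstor_sr):
        # Hypothetical: hand the test a booted VM and make sure it is shut
        # down again even if the test crashed a host mid-way.
        vm = vm_on_linstor_sr
        vm.start()
        vm.wait_for_os_booted()
        yield vm
        if vm.is_running():  # assumed helper
            vm.shutdown(verify=True)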

hosts.remove(sr.pool.master)
# Evacuate the node to be deleted
try:
    random_host = random.choice(hosts) # TBD: Choose Linstor Diskfull node
Member

Suggested change:
-     random_host = random.choice(hosts) # TBD: Choose Linstor Diskfull node
+     random_host = random.choice(hosts) # TBD: Choose Linstor Diskful node

Contributor Author

done

Fail a non-master host from the same pool as the Linstor SR.
Ensure that the VM is able to boot and shut down on all hosts.
"""
import random
Member

random is a common module; perhaps we should put it in the global imports, in case a future function in this module uses it too?

Contributor Author

done

@stormi
Member

stormi commented Nov 24, 2025

Ping @rushikeshjadhav. What's the status after the various comments made?

@rushikeshjadhav
Contributor Author

rushikeshjadhav commented Dec 1, 2025

> Ping @rushikeshjadhav. What's the status after the various comments made?

Will rebase and update the PR.

@Lankou66

> Ping @rushikeshjadhav. What's the status after the various comments made?
>
> Will rebase and update the PR.

Hello @rushikeshjadhav, any update on the review and the rebase mentioned above?

… host.

- Chooses a host within a LINSTOR SR pool and simulates a crash using sysrq-trigger.
- Verifies VM boot and shutdown on all remaining hosts during the outage,
and confirms recovery of the failed host for VM placement post-reboot.
- Ensures SR scan consistency post-recovery.

Signed-off-by: Rushikesh Jadhav <[email protected]>
@rushikeshjadhav force-pushed the feat-storage-linstor-625 branch from 73e2c06 to ebb1e7d on January 26, 2026 11:37
@rushikeshjadhav requested review from a team as code owners on January 26, 2026 11:37
@rushikeshjadhav
Contributor Author

> Ping @rushikeshjadhav. What's the status after the various comments made?
>
> Will rebase and update the PR.
>
> Hello @rushikeshjadhav, any update on the review and the rebase mentioned above?

It's ready.

"""
sr = linstor_sr
vm = vm_on_linstor_sr
# Ensure that its a single host pool and not multi host pool
Member

This comment, as I understand it, says the opposite of what is checked on the line below.
IMO, for simple code like this one, a comment doesn't enhance clarity and might become out of sync with the code. I would just remove it.

vm.wait_for_os_booted()
vm.shutdown(verify=True)

# Wait for random_host to come online
Member

The random host is rebooting while we test that the other hosts can run a VM, but I don't see anything to ensure that the random host was still unusable at that point.
Could you add an assert not random_host.is_ssh_up() or something like that here?
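
A tiny sketch of that guard, placed just before the test waits for the crashed host to return; the message and exact placement are illustrative:

    # The crashed host must still be unreachable here, otherwise the VM checks
    # above did not really run during the outage window.
    assert not random_host.is_ssh_up(), \
        "random_host came back before the checks on the other hosts finished"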

Contributor Author

We are not checking VM resiliency only in the "host down" scenario; the intention was to let the other hosts take over (via HA or anything) and not be contingent on the state of the "random host". Does that work?

Member

I'm not sure I understand. Is this not a test to validate the availability of the Linstor SR when one of the Linstor hosts is down?
