use proxmoxer and fix bugs #40
Conversation
Thanks a lot for your contribution and sorry it took so long for any reviewer to show up. I'm not overly familiar with the Proxmox modules, but they don't have a dedicated maintainer at the moment.
I remember diagnosing the `__utils__` issue on Discord, which is unique to this extension. We should release another v2 with the fixed old code before introducing your breaking changes in a v3, so I took the liberty to patch it in a separate PR. Could you rebase on current main? There are some additional conflicts.
Additionally, I think this change needs more documentation:
- At least `.. versionadded:: 3.0.0`/`.. versionchanged:: 3.0.0` on the module/function docstrings for the important changes (afaict dependency switch, required config and function return values)
- A changelog entry for each fixed issue and breaking change. If there are no submitted issues and you don't want to create each one, you can avoid having to specify an issue number by prefixing the changelog entry file name with a `+`.
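For the docstring directives, a minimal sketch of what this could look like (the function name and wording here are illustrative, not taken from the PR):

```python
def start(name, call=None):
    """
    Start a VM.

    .. versionchanged:: 3.0.0
        Now waits for the Proxmox task to complete before returning.
    """
    # Sketch only; the real implementation lives in the extension.
    raise NotImplementedError
```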
Since it seems you are an active user of this extension and familiar with the code, you could consider becoming a maintainer for saltext-proxmox.
Context:

```python
@patch(_fqn(proxmox.show_instance))
@patch(_fqn(proxmox._wait_for_task))
```
note: Patching in the global context should be avoided. I know the current tests are written that way, but it would be nice if we could use this breaking change to improve the situation by defining fixtures:
```python
@pytest.fixture
def wait_for_task_mock():
    with patch(_fqn(proxmox._wait_for_task), autospec=True) as wait:
        yield wait

# ...
@pytest.mark.usefixtures("wait_for_task_mock")
# ...
```

Context:

```python
upid = "UPID:node1:0016BEC6:568EF5F4:669FB044:qmcreate:101:user@pam!mytoken:"
mock__get_proxmox_client.return_value.post.return_value = upid

with patch("salt.utils.cloud.fire_event", MagicMock()):
```
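The `autospec=True` in the suggested fixture is worth keeping: it makes the mock enforce the real function's signature, so tests fail loudly when a call site drifts. A self-contained sketch of the effect (the `proxmox` namespace here is a stand-in, not the actual module):

```python
import types
from unittest.mock import patch

# Stand-in namespace; only the signature of _wait_for_task matters here.
proxmox = types.SimpleNamespace(
    _wait_for_task=lambda conn, upid, timeout=300: None,
)

with patch.object(proxmox, "_wait_for_task", autospec=True) as wait:
    # Signature-compatible call: accepted and recorded by the mock.
    proxmox._wait_for_task("conn", "UPID:...")
    wait.assert_called_once_with("conn", "UPID:...")
    try:
        # Unknown keyword argument: rejected because of autospec.
        proxmox._wait_for_task(nonsense=True)
        raise AssertionError("expected TypeError from autospec")
    except TypeError:
        pass
```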
nitpick: Afaik MagicMock() is redundant.
Suggested change:

```diff
-with patch("salt.utils.cloud.fire_event", MagicMock()):
+with patch("salt.utils.cloud.fire_event"):
```
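The nitpick can be verified quickly: `patch()` without a `new` argument already substitutes a `MagicMock`, so passing one explicitly is redundant (the `os.getcwd` target below is just a convenient stand-in):

```python
import os
from unittest.mock import MagicMock, patch

with patch("os.getcwd") as mocked:
    # patch() created the replacement itself: it is already a MagicMock.
    assert isinstance(mocked, MagicMock)
    mocked.return_value = "/fake"
    assert os.getcwd() == "/fake"
```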
Context:

```python
    },
}

with pytest.raises(SaltCloudSystemExit), patch("salt.utils.cloud.fire_event", MagicMock()):
```
Suggested change:

```diff
-with pytest.raises(SaltCloudSystemExit), patch("salt.utils.cloud.fire_event", MagicMock()):
+with pytest.raises(SaltCloudSystemExit), patch("salt.utils.cloud.fire_event"):
```
Context:

```python
with patch("salt.utils.cloud.bootstrap", MagicMock()), patch(
    "salt.utils.cloud.filter_event", MagicMock()
), patch("salt.utils.cloud.fire_event", MagicMock()):
    # ...
with patch("salt.utils.cloud.fire_event", MagicMock()):
```
Suggested change:

```diff
-with patch("salt.utils.cloud.fire_event", MagicMock()):
+with patch("salt.utils.cloud.fire_event"):
```
Context:

```python
with patch("salt.utils.cloud.bootstrap", MagicMock()), patch(
    "salt.utils.cloud.filter_event", MagicMock()
), patch("salt.utils.cloud.fire_event", MagicMock()):
    # ...
with patch("salt.utils.cloud.fire_event", MagicMock()):
```
Suggested change:

```diff
-with patch("salt.utils.cloud.fire_event", MagicMock()):
+with patch("salt.utils.cloud.fire_event"):
```
Context:

```python
    "type": "lxc",
}

with patch("salt.utils.cloud.fire_event", MagicMock()):
```
Suggested change:

```diff
-with patch("salt.utils.cloud.fire_event", MagicMock()):
+with patch("salt.utils.cloud.fire_event"):
```
What does this PR do?
- Splitting `token` into `token_name` and `token_value`, and replacing `url` with `host`.
- Using the `proxmoxer` library to replace the self-built `_query()` function.
- Raising a descriptive error when the `technology` parameter is missing in the VM profile.
- `avail_images()` now only shows actual images stored in proxmox (images, vztmpl and iso) instead of all datatypes.
- `list_nodes_full()` now returns all available data about VMs that can be gathered from proxmox.
- `list_nodes()` now uses `list_nodes_full()` and then reduces its output to only id, image, private_ips, public_ips, size and state.
- Adding retry logic to `create()` for starting the VM.
- Moving calls from `salt.utils.cloud.*` to `__utils__`, the reason being that e.g. `salt.utils.cloud.filter_event()` causes an error whereas `__utils__["cloud.filter_event"]()` does not.

What issues does this PR fix or reference?
Fixes:
Previous Behavior
- Used `requests` and the self-written `_query()` function to communicate with the proxmox instance.
- `clone()` and `create()` didn't wait for task completion, which could result in errors in subsequent steps.
- A missing `technology` parameter in a VM profile resulted in erroneous calls to the API.
- A missing `vmid` parameter resulted in erroneous calls to the API.
- `avail_images()` returned all data stored in proxmox's storage locations instead of only showing images.
- `list_nodes_full()` returned the data provided by the resource and config endpoint in a non-standard format.
- `list_nodes()` duplicated the calls to the resource and config endpoint done by `list_nodes_full()` and then returned the required format.
- `create()` sometimes could not start the newly created VM because proxmox takes a few seconds to list the VM via the API.

New Behavior
- Uses the `proxmoxer` library for all proxmox communication.
- `clone()` and `create()` now also wait for task completion, similar to `start()`, `stop()` or `shutdown()`.
- A missing `technology` parameter in a VM profile now results in a descriptive error.
- A missing `vmid` parameter now results in a descriptive error.
- `avail_images()` now only shows images stored in proxmox's storage locations.
- `list_nodes_full()` now returns all available data about the VMs in the format used by `list_nodes()`, with the addition of the keys `config` and `resource` for the respective data.
- `list_nodes()` now uses `list_nodes_full()` to gather data and then only returns `id`, `image`, `private_ips`, `public_ips`, `size` and `state`.
- `create()` now implements retry logic when trying to start the newly created VM, to work around proxmox's delay.

Merge requirements satisfied?
[NOTICE] Bug fixes or features added to Salt require tests.
Commits signed with GPG?
No
Please review Salt's Contributing Guide for best practices.
See GitHub's page on GPG signing for more information about
signing commits with GPG.