Conversation

@bearomorphism
Collaborator

…and, remove type alias

Closes #1678, #1679

Manually tested and added test cases; the results LGTM.

@bearomorphism
Collaborator Author

@Lee-W I already ran pytest tests/commands/test_version_command.py -n auto --regen-all to regenerate the test files, but the pipeline still fails. (The tests pass on my machine.)

Do you have any idea why this happens?

    version = f"{version_scheme.minor}"
    out.write(version.major)
    return
if self.arguments.get("minor"):
Member

I realize now that this creates problems with the (non-)monotonic kinds of versions (and possibly non-semver ones). I'm not sure what to do about it.

I think for now it's fine: if you diverge too much from semver in your custom version scheme, you won't get the full range of features.
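
A minimal sketch of the concern, using packaging.version.Version directly (commitizen's pep440 scheme builds on it, as far as I know; the date-based version below is a hypothetical custom scheme):

from packaging.version import Version

# Semver-like versions have well-defined major/minor parts:
v = Version("1.2.3")
print(v.major, v.minor)   # -> 1 2

# A monotonic, date-based scheme does not:
m = Version("20251212")
print(m.major, m.minor)   # -> 20251212 0  ("minor" is padding, not meaning)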

@woile
Member

woile commented Dec 12, 2025

There was a way to run all the pre-commits, but I don't remember how it works 😅
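
For reference, with a standard pre-commit setup (assuming nothing custom in this repo), running every hook against the whole tree is:

pre-commit run --all-files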



@pytest.mark.parametrize(
    "next_version, current_version, expected_version",
Member

@woile woile Dec 12, 2025

maybe rename next_version to next_increment?
In the user-facing CLI, it makes sense semantically: --next=MAJOR (give me the next major).
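
A quick sketch of what that rename might look like in the test (the parameter values here are made up for illustration):

@pytest.mark.parametrize(
    "next_increment, current_version, expected_version",
    [
        ("MAJOR", "1.2.3", "2.0.0"),
        ("MINOR", "1.2.3", "1.3.0"),
        ("PATCH", "1.2.3", "1.2.4"),
    ],
)
def test_next_increment(next_increment, current_version, expected_version):
    ...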

    assert expected_version in captured.out


def test_next_version_invalid_version(config, capsys):
Member

@woile woile Dec 12, 2025

what about having a parametrize of the combinations that should fail (args should be lists, not sets, since the order of CLI arguments matters):

@pytest.mark.parametrize(
    "args",
    [
        # incomplete list:
        ["next"],
        # ...
        ["--verbose", "next"],
    ],
)
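
A self-contained version of that idea might look like this (the invocation style, the SystemExit expectation, and the test name are assumptions, not taken from this PR):

import sys

import pytest

from commitizen import cli


@pytest.mark.parametrize(
    "args",
    [
        ["next"],               # hypothetical invalid combination
        ["--verbose", "next"],  # ...
    ],
)
def test_version_rejects_invalid_args(mocker, args):
    # Simulate the CLI call; argparse should bail out with SystemExit.
    mocker.patch.object(sys, "argv", ["cz", "version", *args])
    with pytest.raises(SystemExit):
        cli.main()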

    out.error("Invalid version.")
    return

if next_str := self.arguments.get("next"):
Member

what would happen here if I run cz version --project --next? I would expect that to work.

Collaborator Author

That also works.

Collaborator Author

The behavior is the same as cz version --next=*.
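
Concretely, per the replies above (the increment value here is illustrative):

cz version --project --next=MAJOR   # per the reply above, behaves the same as:
cz version --next=MAJOR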

@woile
Member

woile commented Dec 12, 2025

Nice to see this going forward 🎉

@codecov

codecov bot commented Dec 13, 2025

Codecov Report

❌ Patch coverage is 93.33333% with 5 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (v4-11-0@2072f8e). Learn more about missing BASE report.

Files with missing lines         Patch %   Lines
commitizen/version_increment.py  81.25%    3 Missing ⚠️
commitizen/commands/version.py   94.59%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             v4-11-0    #1724   +/-   ##
==========================================
  Coverage           ?   97.65%           
==========================================
  Files              ?       61           
  Lines              ?     2643           
  Branches           ?        0           
==========================================
  Hits               ?     2581           
  Misses             ?       62           
  Partials           ?        0           
Flag        Coverage Δ
unittests   97.65% <93.33%> (?)


@bearomorphism
Collaborator Author

> @Lee-W I already ran pytest tests/commands/test_version_command.py -n auto --regen-all to regenerate the test files, but the pipeline still fails. (The tests pass on my machine.)
>
> Do you have any idea why this happens?

I see why the test failure happens. Please see my other PR #1726.

@bearomorphism
Collaborator Author

I will rebase this branch after #1726 is merged. The test failure should be resolved then.
