adding insert_assert function and make pytest-plugin (#51)
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##             main      #51      +/-   ##
==========================================
- Coverage   99.72%   98.02%   -1.71%
==========================================
  Files          11       12       +1
  Lines         737      911     +174
  Branches      188      226      +38
==========================================
+ Hits          735      893     +158
- Misses          2       15      +13
- Partials        0        3       +3
```

Continue to review full report at Codecov.
Force-pushed from 98f3b31 to 59aafe2.
This is so amazing! My 2 cents are:
- Probably worth exploring https://insta.rs and maybe take some inspiration. In particular, integrating redactions from insta.rs with dirty-equals types would (1) be amazing for UUIDs/timestamps and (2) justify this being part of dirty-equals!
- It would be nice to have an option like `--insert-assert-reset` (delete the existing assert statement and uncomment the `insert_assert(...)`) and/or `--insert-assert-replace` (replace the existing `assert ...` with a new one).
```python
parser.addoption(
    '--insert-assert-fail',
    action='store_true',
    default=False,
    help='Fail tests which include one or more insert_assert() calls',
)
```
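The flag registered above would typically be read back inside the plugin via `config.getoption`. A minimal sketch of that consuming side, with a stand-in config object (illustrative names only, not the PR's actual plugin code):

```python
class DummyConfig:
    """Stand-in for pytest's Config object, for demonstration only."""

    def __init__(self, options):
        self._options = options

    def getoption(self, name):
        return self._options[name]


def should_fail(config, insert_assert_calls):
    # Fail the test only when the flag is set AND at least one
    # insert_assert() call actually rewrote the test source.
    return bool(config.getoption('insert_assert_fail')) and insert_assert_calls > 0


config = DummyConfig({'insert_assert_fail': True})
print(should_fail(config, 2))  # -> True
print(should_fail(config, 0))  # -> False
```

In a real plugin the same check would live in a hook such as `pytest_runtest_makereport`, where the call count for the finished test is known.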
I think this should be the default option. Unless users explicitly say "I want you to modify my code and not fail", we should fail any time we are making changes. I think that's what https://insta.rs does (it might also be good to explore other areas of their design that can be copied). So maybe instead we could have `--insert-assert-pass`: don't fail if changes are made.
I tried that, for me at least it was very annoying.
I guess at least we should have an env var that means you can change the behavior for an entire session.
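The session-wide override suggested here could be resolved with a simple precedence rule: CLI flag first, then the environment variable, then the default. A sketch (the env var name `INSERT_ASSERT_PASS` is hypothetical, not something the plugin defines):

```python
import os


def insert_assert_should_pass(cli_flag: bool) -> bool:
    """Decide pass/fail behaviour for a session.

    Precedence: explicit CLI flag, then a session-wide environment
    variable (hypothetical name), then fail-by-default.
    """
    if cli_flag:
        return True
    return os.environ.get('INSERT_ASSERT_PASS', '') in ('1', 'true', 'yes')


os.environ['INSERT_ASSERT_PASS'] = '1'
print(insert_assert_should_pass(False))  # -> True, via the env var
```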
Maybe a config in pyproject.toml:

```toml
[tool.insertassert]
pass = true
```

I do see the value in having it pass by default. But I feel strongly that failing if we change any code should be the default behavior.
> I tried that, for me at least it was very annoying.

What was the annoying part? I think you'd run it once, get a bunch of noisy failures, look at the diff and be like "alright this is okay" -> accept the changes.
I like how insta takes it one step further and has a generate -> review -> apply stage. Maybe we could do something like:

```python
# insert_assert(resp.json())
assert resp.json() == {"foo": "bar"}
```

Run with `--insert-assert-replace`:

```python
# insert_assert(resp.json())
# proposed: assert resp.json() == {}
assert resp.json() == {"foo": "bar"}
```

That test run fails, and since the original assertion was not replaced, subsequent test runs fail by default unless you run with `--insert-assert-preview`, which uses the proposed version.

Once you've reviewed the proposed changes (maybe via `git diff`, although we could come up with some fancy tools) you run `--insert-assert-accept`, which yields:

```python
# insert_assert(resp.json())
assert resp.json() == {}
```

Alternatively you could just run with `--insert-assert-accept` and `--insert-assert-pass` from the starting point and get to the end result with no failing tests directly.
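The "proposed" step of this workflow amounts to a small source rewrite: leave the existing assert alone and insert a commented proposal above it. A sketch of that rewrite (`propose_assert` is a hypothetical helper, not code from this PR):

```python
def propose_assert(lines, old_assert, new_assert):
    """Insert a '# proposed:' comment above the matching assert line,
    preserving its indentation, instead of replacing it outright."""
    out = []
    for line in lines:
        if line.strip() == old_assert:
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f'{indent}# proposed: {new_assert}')
        out.append(line)
    return out


src = [
    '# insert_assert(resp.json())',
    'assert resp.json() == {"foo": "bar"}',
]
for line in propose_assert(src, 'assert resp.json() == {"foo": "bar"}', 'assert resp.json() == {}'):
    print(line)
```

The accept step would then be the inverse: drop the original assert and strip the `# proposed: ` prefix.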
On `[tool.insertassert]`, we can just use pytest's `addopts`. But with that, pytest will fail if dirty-equals (or devtools, see below) is not installed.

I'm happy to add more features over time, but for the initial release I'd prefer to keep it as simple as possible.

I'd be happy to add `--insert-assert-pass` as well as `--insert-assert-fail` and default to `--insert-assert-fail`; that way I'll just set `--insert-assert-pass` myself.

The other option is to add a new result status, see `pytest_report_teststatus`; problem is, I tried that and pytest-sugar didn't know how to deal with the custom status. Then again, pytest-sugar needs some love; I'm worried I might have to stop using it.
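For context, the `pytest_report_teststatus` hook mentioned above returns a `(category, shortletter, verboseword)` tuple for a test report. A sketch of what the custom-status variant might look like (the "inserted" category and the `insert_assert_calls` attribute are made up here, and as noted, plugins like pytest-sugar may not understand a custom category):

```python
def pytest_report_teststatus(report, config):
    """Report tests that rewrote assertions with a custom status."""
    if report.when == 'call' and getattr(report, 'insert_assert_calls', 0):
        # (category, short letter for the progress line, verbose word)
        return 'inserted', 'I', 'INSERTED'
    # Returning None lets pytest fall back to the default status.
    return None


class FakeReport:
    """Minimal stand-in for pytest's TestReport, for demonstration."""
    when = 'call'
    insert_assert_calls = 2


print(pytest_report_teststatus(FakeReport(), config=None))  # -> ('inserted', 'I', 'INSERTED')
```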
On pytest-sugar: Teemu/pytest-sugar#224
I think it would be really interesting to enable something like this:

```python
# insert_assert(
#     resp.json(),
#     repl={
#         '[]["id"]': IsUUID(),
#         '[0]["created_at"]': IsDatetime(
#             approx=datetime.utcnow(),
#             delta=timedelta(seconds=1),
#         )
#     }
# )
expected = [
    {
        "id": IsUUID(),
        "created_at": IsDatetime(
            approx=datetime.utcnow(),
            delta=timedelta(seconds=1),
        ),
        "price": 123.0,
    }
]
assert resp.json() == expected
```

(source: https://twitter.com/adriangb01/status/1575196309891719168)
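The core of the `repl` idea is substituting matchers into the captured value before the assert is generated. The bracketed path syntax in the tweet is unspecified, so this sketch uses explicit `(index, key)` tuples instead; `apply_repl` and the placeholder matcher are hypothetical, not dirty-equals API:

```python
import copy


def apply_repl(data, repl):
    """Replace fields in a decoded-JSON structure (a list of dicts here)
    so volatile values like ids/timestamps become matcher objects.

    `repl` maps (list_index, dict_key) tuples to replacement objects.
    The original data is left untouched.
    """
    out = copy.deepcopy(data)
    for (index, key), value in repl.items():
        out[index][key] = value
    return out


IS_UUID = '<IsUUID()>'  # placeholder standing in for dirty_equals.IsUUID

resp_json = [{'id': 'abc-123', 'price': 123.0}]
expected = apply_repl(resp_json, {(0, 'id'): IS_UUID})
print(expected)  # -> [{'id': '<IsUUID()>', 'price': 123.0}]
```

Supporting the wildcard `'[]'` form would just mean expanding a path into one `(index, key)` pair per list element.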
On reflection, I think I'll move this to https://github.com/samuelcolvin/python-devtools. Rationale:
Replaced by samuelcolvin/python-devtools#126. It would be nice to try to add support for dirty-equals types (even though this has moved to another repo), in particular:
See https://gist.github.com/samuelcolvin/625a1655c2a73e02469fc3c27285ca42

To use now, install:

With that you should be able to add `insert_assert()` into your tests and it should "just work™".

Example code:

Still to do: