[ENH]: Add self.__eq__() to Text object #23273
Comments
Seems reasonable; however, I think there are a couple of judgement calls we will need to make about what parts of the state participate in deciding equality, and reasonable people will disagree about which way to go. On the "clearly participates" side I would put the string itself and the font size, but everything else seems ambiguous to me.

The position is complicated because it is run through the transform (which is what de facto sets the coordinate system the position is in). If we are strict that positions are only comparable when the transform is the same, that is clear, but probably not all that useful (as then the only texts that could ever compare equal would be those sharing a transform). You could also compare the position by going all the way through the transform, which would let text in different figures be equal (so axis labels at the same position on same-size axes in different figures), but not same-string axis labels on different axes in the same figure. It also adds a weird coupling to the overall figure size, which seems wrong.

The other thing I see problems with is the font / font spec. I do not recall off the top of my head how aggressively / when we resolve the selected font. Because there are heuristics for finding the font closest to what the user asked for, it is possible for more than one input to resolve to the same font (in the worst case, random strings will all fall back to the default font, as we think that "wrong font" is better than "no text"). This also seems tractable: either we resolve early and I'm worried for no reason, or we pay the cost of fully resolving the fonts when we do the comparison.
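The transform question above can be made concrete with a small sketch, using only public Matplotlib API (`get_position`, `get_transform`, `Transform.transform`): two texts at the same data position in different axes have equal raw positions, but different positions once run "all the way through the transform" into display space.

```python
# Sketch: raw positions vs display-space positions of two Text objects.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

fig, (ax1, ax2) = plt.subplots(1, 2)
t1 = ax1.text(0.5, 0.5, "label")
t2 = ax2.text(0.5, 0.5, "label")

# The raw positions compare equal even though the texts live in
# different axes...
assert t1.get_position() == t2.get_position()

# ...but the display-space positions differ, because each text's
# transform (here, transData of two different axes) maps (0.5, 0.5)
# to a different place on the canvas.
p1 = t1.get_transform().transform(t1.get_position())
p2 = t2.get_transform().transform(t2.get_position())
assert not np.allclose(p1, p2)
```

This also shows the coupling mentioned above: `p1` and `p2` depend on the figure size and subplot layout, not just on the text itself.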
Actually, I think we should not add such an `__eq__`. (Also, it would be rather strange to special-case Texts to have their own `__eq__`.)
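One general reason an `__eq__` is not free (a plain-Python fact, not specific to Matplotlib): defining `__eq__` on a class implicitly sets `__hash__` to `None`, so instances stop working as dict keys or set members unless `__hash__` is restored explicitly. A minimal illustration with a hypothetical class:

```python
# Defining __eq__ without __hash__ makes instances unhashable.
class LabelledThing:
    def __init__(self, s):
        self.s = s

    def __eq__(self, other):
        return isinstance(other, LabelledThing) and self.s == other.s

# Python sets __hash__ to None automatically when __eq__ is defined.
assert LabelledThing.__hash__ is None

try:
    {LabelledThing("x")}  # can no longer go in a set
    raised = False
except TypeError:
    raised = True
assert raised
```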
If …
Hence, one can possibly add one of two things (if we want to make that easier): …
I guess the second is probably more appropriate? Then the question is whether there is enough benefit to having it locally?
That is the way to go. It does not create new API surface.
We can let such functionality emerge as needed: if you do a thing more than twice, consider whether there is a pattern that we want to extract into a function. That function could also start in …
Although the consensus seems to be not to implement `__eq__` …
One of the pandas devs expressed interest in an equality method, which makes me think that it should be public & in testing - especially if it's written as a general helper …
I think a minimal version of this looks like:

```python
def test_artist_equality(a, b, properties, attributes):
    for p in properties:
        # TODO deal with numpy arrays
        assert getattr(a, f'get_{p}')() == getattr(b, f'get_{p}')()
    for attr in attributes:
        # TODO support numpy arrays
        assert getattr(a, attr) == getattr(b, attr)
```

which only uses public API (but takes a very literal view of "equal"). However, moving from strict `==` to per-property comparison functions would look something like:

```python
from typing import Any, Callable, Dict

def test_artist_equality(a, b, properties: Dict[str, Callable[[Any, Any], bool]]):
    for p, func in properties.items():
        assert func(getattr(a, f'get_{p}')(), getattr(b, f'get_{p}')())

# or

import operator

def test_artist_equality(a, b, properties, comparators: Dict[str, Callable[[Any, Any], bool]]):
    for p in properties:
        func = comparators.get(p, operator.eq)
        assert func(getattr(a, f'get_{p}')(), getattr(b, f'get_{p}')())
```

but that would get pretty awkward to use, so I expect it would grow a bunch of sensible defaults depending on the key (and the associated complexity). In that case it may be simpler for users to write the loop / hard-code the case they want in the test themselves (or their own helper if they need to do it a bunch of times) rather than try to understand the general function we provide!

In addition to the variations on how to normalize / which "equal" heuristic to use per property/attribute, I could also see an alternate API where instead of "include lists" we pass "exclude lists" (at least for the "properties"). I do not think we have felt a need for this internally, as we either are only comparing a small number of properties/attributes or we rely on "it rendered the same" to be the ultimate arbiter (as what the user sees is what actually matters); however, I see why for testing pandas' plotting neither of those are good options.

It is not clear to me that we can sort out "the right API" without a good motivating use case (or multiple). If pandas has that use case, I think this should start as a (private) helper function in the pandas test suite, and once it has stabilized / proved to be useful, then we can pull it back into Matplotlib (mostly for practical reasons... I assume pandas does not want to wait for whatever version we release this function in to be their oldest tested version of Matplotlib!).
I think part of the reason that the function doesn't yet exist is that it requires an understanding of matplotlib artist semantics and internals (as seen in this discussion) that might be out of scope for pandas. What about either asking to expand the scope of https://github.com/matplotlib/pytest-mpl or creating a small standalone testing library that initially contains this function and then maybe grows other testing utilities?
Problem

When writing tests for Matplotlib plots, one often needs to check whether the text is showing what's expected. Adding an `__eq__()` would make this much simpler to do.

Proposed solution

Add `self.__eq__()` to the Text object.
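For concreteness, here is a hedged sketch of what the proposal could look like, written as a hypothetical subclass rather than a change to Matplotlib itself. It compares only the string and font size (the "clearly participates" state from the discussion above) and explicitly restores `__hash__`, since defining `__eq__` would otherwise make the objects unhashable:

```python
# Hypothetical sketch, NOT how Matplotlib implements it: upstream Text
# has no __eq__, and the state that should participate is contested.
import matplotlib
matplotlib.use("Agg")
from matplotlib.text import Text

class ComparableText(Text):
    def __eq__(self, other):
        if not isinstance(other, Text):
            return NotImplemented
        return (self.get_text() == other.get_text()
                and self.get_fontsize() == other.get_fontsize())

    # Defining __eq__ sets __hash__ to None; restore identity hashing so
    # the artist can still live in dicts/sets. Note that identity hash
    # plus value equality violates the usual hash contract - one of the
    # design problems an upstream implementation would have to resolve.
    __hash__ = object.__hash__

a = ComparableText(0, 0, "label", fontsize=10)
b = ComparableText(1, 1, "label", fontsize=10)
assert a == b                      # same string and size; position ignored
assert not (a == Text(0, 0, "other"))
```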