RFC: Report on Implementations' Statuses #314
@gregsdennis found https://cburgmer.github.io/json-path-comparison/, a similar effort for JSON Path.
I've invited him to do the same thing for us (or at the very least let us copy/modify what he's done).
Looks like it's a copy/modify, if people here are okay with going that direction.
@cburgmer wrote up some of the logic behind his test suite. It's a really good read, and a lot of the reasoning applies well to JSON Path.

Primarily of note, there are no "expected" test results. He's truly performing a comparison between implementations and reporting on what he calls the consensus between them, determined by a majority-plus-one of the available implementations agreeing on the output. He does this because there is no specification for JSON Path, so there is no really "correct" response for any given query.

Since we have a specification, we don't need the consensus concept. Our test suite explicitly states the expected result, so we just need to report adherence to that result. The thing I like from his report is the report itself.

I'll work on modifying it so that it can at least run my implementation. I'd like other implementers to add their own libraries, but if I can work out how to do some of them, I'll start at the top of the implementation list on the site and work down.
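For illustration only, here is a minimal Python sketch of how such a consensus could be computed from per-implementation results. The `outputs` shape is hypothetical (not json-path-comparison's actual data model), and the threshold is just one reading of "majority plus one":

```python
from collections import Counter
import json


def consensus(outputs):
    """Return the result enough implementations agree on, or None."""
    # `outputs` maps implementation name -> its JSON-serializable result for
    # one query (a hypothetical shape, not json-path-comparison's internals).
    canonical = {
        name: json.dumps(result, sort_keys=True)
        for name, result in outputs.items()
    }
    counts = Counter(canonical.values())
    winner, votes = counts.most_common(1)[0]
    # One reading of "majority plus one": more than half of the available
    # implementations must agree before a consensus is declared.
    threshold = len(outputs) // 2 + 1
    return json.loads(winner) if votes >= threshold else None


# Three of four hypothetical implementations agree, so ["value"] is the consensus.
print(consensus({
    "impl-a": ["value"],
    "impl-b": ["value"],
    "impl-c": ["value"],
    "impl-d": [],
}))
```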
If it helps you, I can call out the complexity in json-path-comparison:
As 3. probably applies to your goal as well, I can clarify this point:
It would be great to keep the consensus logic, though, because this test suite isn't the only data we could compare implementations on. If we have a black-box setup for the various implementations, I'd be happy to set up a fuzzing harness which uses
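To illustrate the general idea only (the executable paths, the stdin/stdout JSON protocol, and the input generator below are all hypothetical assumptions, not an agreed interface), a differential fuzzing loop over black-box implementations might look roughly like this:

```python
import json
import random
import subprocess

# Hypothetical black-box executables; each is assumed to read one JSON test
# case on stdin and print its result as JSON on stdout.
IMPLEMENTATIONS = ["./harness/impl-a", "./harness/impl-b"]


def random_instance(depth=2):
    """Generate a small random JSON value to feed every implementation."""
    if depth == 0:
        return random.choice([None, True, False, 0, 1, "x"])
    return {f"k{i}": random_instance(depth - 1) for i in range(random.randint(0, 3))}


def run(executable, case):
    completed = subprocess.run(
        [executable], input=json.dumps(case), capture_output=True, text=True
    )
    return completed.stdout.strip()


for _ in range(100):
    case = {"schema": {"type": "object"}, "instance": random_instance()}
    results = {exe: run(exe, case) for exe in IMPLEMENTATIONS}
    if len(set(results.values())) > 1:
        # The implementations disagree; this generated input is worth a closer look.
        print("disagreement on", json.dumps(case), results)
```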
I'm going to close this as well and say "Bowtie is this". Help + feedback is definitely still welcome, and results are continuously published here.
There should be a mechanism by which implementers (those currently mentioned in the README, or others) can submit their implementation to be automatically tested, with the results displayed on each commit to the test suite.
Specifically, we would report on:
At first thought, an easy approach may be to have each participating implementation provide a black-box executable which:
Internally (here in the test repo), we would then run each of these binaries via a GitHub Action, collect the results, and expose them for viewing.
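As a rough sketch of what that collection step could look like, assuming a hypothetical stdin/stdout protocol, a placeholder `tests/` layout, and made-up executable paths (none of which are a defined interface):

```python
import json
import pathlib
import subprocess

# Assumed registry: implementation name -> path of its black-box executable.
IMPLEMENTATIONS = {"example-validator": "./harnesses/example-validator"}
# Assumed layout: the suite's test files live under tests/ as JSON documents.
TEST_FILES = sorted(pathlib.Path("tests").rglob("*.json"))

report = {}
for name, executable in IMPLEMENTATIONS.items():
    passed = failed = 0
    for test_file in TEST_FILES:
        # Hypothetical protocol: send a whole test file on stdin and read back
        # one boolean per test, true when the implementation produced the
        # expected result.
        completed = subprocess.run(
            [executable],
            input=test_file.read_text(),
            capture_output=True,
            text=True,
        )
        results = json.loads(completed.stdout)
        passed += sum(results)
        failed += len(results) - sum(results)
    report[name] = {"passed": passed, "failed": failed}

# A workflow step could publish this summary, e.g. as a build artifact
# or as input to a static results page.
print(json.dumps(report, indent=2))
```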
(This issue is very much open to discussion from implementers -- comments welcome.)