Roadmap #16
Comments
I want to get #14 with the sanity check + Travis implemented by the end of the day or tomorrow, so yeah that's on my mind :). Can you elaborate on what you mean by your second point? There are already a number of implementations using the test suite (including my own).
Oh, and not sure if this is included in the type of roadmap you had in mind, but I'd like to get specs added for whatever nooks and crannies of the draft 3 schema I've overlooked (getting some more eyes on that would help :), then to start adding / moving specs to test JSON Pointer / references, and after that start on draft 4.
Anything that needs doing is acceptable as a signpost on the roadmap, in no particular order. Feel free to edit the first post to add new items and perhaps move completed items to a "completed" list. Following up on No. 2 shortly.
Number 2 might blur the boundaries of scope, perhaps. What I am suggesting is that we collect a list of supported libraries / JSON Schema validators and that we do the compatibility test implementations for each. This would help with collecting data for the site's implementations page, allowing us to make indisputable recommendations based on first-hand experience, and the numbers don't lie. If that's out of scope, perhaps a better approach is to create a sub-project for each, so they can be maintained separately and their CI tests executed and reported on separately.

Something that would be in scope, perhaps, is an executor of sorts: a command which runs through the tests and in turn passes, as a command line argument, the URI of the JSON to validate, considering the exit status and compiling a report of the results. This would simplify adoption by minimizing the effort required to execute the tests and by producing normative reports. We could go one step further and capture these results in a central database which we can query for the previously mentioned stats requirement. Thoughts?

Have any libraries that you know of adopted these test assets themselves yet, as of today?
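To make the executor idea a bit more concrete, here is a rough sketch of what I am imagining. The script name, the tests path, and the convention that the validator command exits 0 for valid and non-zero for invalid are placeholders rather than an agreed interface:

```python
#!/usr/bin/env python
"""Rough sketch of the proposed executor, not an agreed interface.

Assumes:
  * test files live under tests/draft3/ and follow the suite layout:
    each file is a list of groups with a "schema" and a list of "tests",
    and each test has "data" plus an expected "valid" boolean
  * the validator is any command that takes a schema path and an instance
    path as arguments and exits 0 for valid, non-zero for invalid
"""
import glob
import json
import subprocess
import sys
import tempfile


def run(validator_cmd, tests_glob="tests/draft3/*.json"):
    passed = failed = 0
    for path in sorted(glob.glob(tests_glob)):
        with open(path) as f:
            groups = json.load(f)
        for group in groups:
            for test in group["tests"]:
                # Dump schema and instance to temp files so validators in
                # any language can be driven purely from the command line.
                with tempfile.NamedTemporaryFile(
                    "w", suffix=".json", delete=False
                ) as schema_file, tempfile.NamedTemporaryFile(
                    "w", suffix=".json", delete=False
                ) as instance_file:
                    json.dump(group["schema"], schema_file)
                    json.dump(test["data"], instance_file)
                status = subprocess.call(
                    validator_cmd + [schema_file.name, instance_file.name]
                )
                ok = (status == 0) == test["valid"]
                passed += ok
                failed += not ok
                if not ok:
                    print("FAIL %s: %s" % (path, test["description"]))
    print("%d passed, %d failed" % (passed, failed))
    return failed


if __name__ == "__main__":
    # e.g.  python run_suite.py path/to/some-validator --some-flag
    sys.exit(1 if run(sys.argv[1:]) else 0)
```

The report could then simply be this command's output (or a JSON summary of it) collected somewhere central, per the stats idea above.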
Aha, OK, I think I'm getting it a bit more. I really like the idea of being able to quantify how well the implementations listed on the site perform on the suite. That definitely sounds good to me. The executor also sounds good, so you mean something like a jsonschema_suite runner, where jsonschema_suite consecutively invokes the validating command for each test?

If you want to see something using this, my validator loads and runs these tests, and essentially that comprises 90% of my suite. The full list is in the README; the other validators basically do similar things with jsunit and whatever the Haskell thing is.
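For anyone curious what that loading step can look like, here is a minimal sketch; `myvalidator.is_valid` is a stand-in for whatever entry point a given library exposes, and the path assumes the suite's draft3 layout:

```python
import glob
import json
import unittest

# "myvalidator.is_valid" is a hypothetical stand-in: it takes
# (instance, schema) and returns True/False.
from myvalidator import is_valid


def make_test(schema, data, valid):
    def test(self):
        self.assertEqual(is_valid(data, schema), valid)
    return test


class SuiteTests(unittest.TestCase):
    """One generated test method per case found in the suite's JSON files."""


for path in glob.glob("tests/draft3/*.json"):
    with open(path) as f:
        for group in json.load(f):
            for index, case in enumerate(group["tests"]):
                name = "test_%s_%s_%d" % (
                    group["description"].replace(" ", "_"),
                    case["description"].replace(" ", "_"),
                    index,
                )
                setattr(
                    SuiteTests,
                    name,
                    make_test(group["schema"], case["data"], case["valid"]),
                )


if __name__ == "__main__":
    unittest.main()
```

Generating one test method per case keeps failures readable, since every case shows up individually in the test runner's output.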
Some of this has been done, so I'm gonna move the remaining idea here to its own ticket.
@Julian have you given the road ahead much thought?
Some food for thought:
Let's elaborate...