Allow terms to expand to multiple IRIs #142
Indeed, I was asking about this on IRC a few days ago as well, but it seems like it's not supported (yet). In a way this is similar to …
An alternative would be content negotiation, or having different URLs for documents expressed with different vocabularies. I proposed (in the last paragraph of that comment) to use a …
Thanks for the suggestion. I have not seen that pattern (conneg based on which predicates the consumer wants) used before in Linked Data, and I agree with Gregg Kellogg's sentiment that it isn't something I'm ready to accept. Even if I were on board, I wouldn't feel comfortable relying on such a novel pattern in Drupal core without seeing it used in the wild by others.
I'm skeptical about optimizing the format to deliver a highly redundant set of triples to increase the chances that a client understands it. All the clients I've been working on would just store all the triples; this would be quite a mess. I think the situation is different with RDFa, where the same HTML part might be the value of properties with distinct meanings (say "author" and "copyRightsHolder"). Instead of the additional syntax, one can express this with owl:sameAs relationships on the semantic level.
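To make the sameAs alternative concrete: the minimal inference a client would need is to copy each triple across all predicates declared equivalent (strictly, `owl:equivalentProperty` is the construct for predicates, but the idea is the same). This is an illustrative sketch only; the function name and the tuple-based data shapes are made up for this example and are not from any tool discussed in this thread.

```python
# Sketch of the minimal "reasoning" the sameAs approach would require on
# the client: given pairs of equivalent predicates, materialize each
# triple under every predicate equivalent to its own.

def materialize(triples, equivalences):
    # Build equivalence groups from the declared pairs.
    groups = {}
    for a, b in equivalences:
        group = groups.get(a, {a}) | groups.get(b, {b})
        for p in group:
            groups[p] = group
    result = set(triples)
    for s, p, o in triples:
        for q in groups.get(p, {p}):
            result.add((s, q, o))
    return result

triples = {("ex:doc", "dc:title", "Home")}
equivalences = [("dc:title", "schema:name"), ("schema:name", "foaf:name")]
print(sorted(materialize(triples, equivalences)))
```

The point of the rebuttal that follows is that even this small closure step is more than the targeted tools are willing to do.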
The tools we're targeting don't do reasoning, so relying on owl:sameAs would make it a no-go for us.
This was discussed during the call today and some solutions were brought up. Having multiple IRIs for a single term is useful for expressing data, and it's easy to support in the expansion algorithm. It's more of a challenge for the compaction algorithm, though, since multiple values have to be compacted into one term when such a context is in use (the term doesn't appear as such in the input). The group evaluated the possibility of supporting this feature only during expansion and not compaction, but it would be the first time this happened in JSON-LD, and it doesn't sound like a good idea. The conneg approach was brought up but rejected. The following resolution was taken:
RESOLVED: Support a single term expanding to multiple IRIs when an array of …
The details for the compaction algorithm still need to be worked out, but a possible syntax for the context may look like the following:
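The syntax that was eventually adopted (quoted from a later comment in this thread) attaches the array of IRIs to `@id` in an expanded term definition:

```json
{
  "@context": {
    "term": {"@id": ["dc:title", "schema:name", "foaf:name"]}
  }
}
```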
@linclark The way I implemented this in the spec is slightly different from what you asked for; this is the only supported mechanism right now:

```json
{"@context": {"term": {"@id": ["dc:title", "schema:name", "foaf:name"]}}}
```

That is, this will not work:

```json
{"@context": {"term": ["dc:title", "schema:name"]}}
```

The reasoning I used suggested two things: 1) we should try to have only one way of doing this if we can, to reduce the number of patterns that authors *reading* JSON-LD must be aware of, and 2) this is an advanced feature and we don't want authors accidentally triggering it. The only thing pulling me in the other direction was the nicer syntax that you proposed, but that wasn't enough to mitigate the two concerns in the previous sentence. That said, others in the group might disagree with this approach and you're more than welcome to raise a new issue to support the other syntax in addition to the one added above.
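Expansion with such a term is straightforward to sketch. This is illustrative only, not the spec's expansion algorithm; the prefix table and helper names are made up for the example. A term whose `@id` is an array expands to one property per IRI, duplicating the value:

```python
# Illustrative sketch only (not the spec's expansion algorithm).

PREFIXES = {
    "dc": "http://purl.org/dc/terms/",
    "schema": "http://schema.org/",
    "foaf": "http://xmlns.com/foaf/0.1/",
}

def expand_iri(curie):
    prefix, sep, local = curie.partition(":")
    if sep and prefix in PREFIXES:
        return PREFIXES[prefix] + local
    return curie

def expand(doc):
    context = doc.get("@context", {})
    expanded = {}
    for term, value in doc.items():
        if term == "@context":
            continue
        definition = context.get(term)
        if isinstance(definition, dict) and isinstance(definition.get("@id"), list):
            iris = definition["@id"]          # term expands to several IRIs
        elif isinstance(definition, dict):
            iris = [definition["@id"]]
        elif isinstance(definition, str):
            iris = [definition]
        else:
            iris = [term]                     # no definition: keep as-is
        for iri in iris:
            expanded.setdefault(expand_iri(iri), []).append(value)
    return expanded

doc = {
    "@context": {"term": {"@id": ["dc:title", "schema:name", "foaf:name"]}},
    "term": "Linked Data",
}
print(expand(doc))
```

As the thread notes, the hard direction is the inverse: compaction has to recognize that three expanded properties carrying the same value should collapse back into one term.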
Yes, I agree that this is advanced usage, and an expanded term definition is called for.
Thanks for the heads up, the way you propose is fine for us, so no …
I'm reopening this issue until the API algorithms have been updated and we've decided how this works in compaction.
I think this is quite weird: a mini reasoning mechanism is being built into the serialization format spec here, something that can already be achieved both with current reasoning tools (OWL) and with current SPARQL standards (SPARQL CONSTRUCT). Meanwhile, making the serialization more compact and readable, as requested in #146, was denied with the argument that one should instead use some mechanism provided by some future SPARQL version to transform crappy RDF that serializes nicely to JSON-LD back into the actually semantically expressive triples. (From the discussion at http://json-ld.org/minutes/2012-08-07/#topic-3)
Actually I tend to agree. It complicates things quite a bit without bringing many advantages, IMHO. The only argument for it is to save bandwidth when transferring compacted documents. I haven't tried it, but I'm quite sure that compressing the data before transmitting it will result in the same bandwidth savings. On the other hand, I find it a bit problematic that a document might get several times larger after expansion, and clients need to expand it to use this feature. Maybe we should revisit this feature.
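The compression claim above is easy to sanity-check with a rough sketch (made-up data, not a benchmark): a value repeated under several predicates compresses down to short back-references, so generic compression recovers most of the redundancy cost.

```python
# Rough sanity check: does generic compression recover the cost of
# repeating the same value under several predicates?
import json
import zlib

value = "A fairly long title that gets repeated under several predicates"
redundant = json.dumps({
    "http://purl.org/dc/terms/title": value,
    "http://schema.org/name": value,
    "http://xmlns.com/foaf/0.1/name": value,
}).encode()
single = json.dumps({"http://purl.org/dc/terms/title": value}).encode()

# The repeated value compresses to a short back-reference, so the size
# gap between the two documents shrinks sharply after compression.
print("raw bytes:       ", len(redundant), "vs", len(single))
print("compressed bytes:", len(zlib.compress(redundant)),
      "vs", len(zlib.compress(single)))
```

This only addresses the bandwidth argument, of course; the in-memory blow-up after expansion remains.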
The only advantage is that you save bandwidth when transferring systematically redundant information. Transferring redundant information is not something that should be encouraged, so I personally find it quite absurd to optimize the serialization format for what is more a bad practice than a legitimate use case.
RESOLVED: The group is committed to supporting language maps and property generators in JSON-LD 1.0.
The property-generator round-tripping algorithm is tracked in issue #160.
RESOLVED: Adopt Gregg Kellogg's property generator algorithm (see issue #160) when expanding/compacting, with the following modifications: 1) use subject definitions everywhere when expanding; 2) generate bnode IDs for all subject definitions without an …
RESOLVED: Add warning language to the JSON-LD Syntax and API specs noting the most problematic issues when working with property generators.
RESOLVED: Add a non-normative note telling developers that their implementations may have a feature that allows all but one node definition created by a property generator to be collapsed into a node reference.
@lanthaler - quick question on the new property generator algorithms. I'm reviewing them to determine whether or not we can close this issue. You have this text in the spec right now:

> If active property is a JSON object, i.e., it is a property generator, set active property to the result of performing the Find and Remove Property Generator Duplicates algorithm passing element, property, null for value, the active context, and active property.

Shouldn't that be "set the value associated with active property"?
@lanthaler - It seems as if all of the necessary algorithms have been updated to support property generators. Can we close this issue?
No, it's correct as it is. The return value of the Find and Remove Property Generator Duplicates algorithm is the property name (or in the worst case the full IRI) that should be used in the compacted result. E.g., you might have three potential property generators A, B, and C, and it turns out that neither A nor B matches; in that case the algorithm deletes the duplicates in all property IRIs associated with generator C and then returns C, so that the compaction algorithm adds the property C to the result with the (de-duplicated) data.
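The behaviour described above can be sketched roughly as follows. This is illustrative only, not the spec's actual algorithm; the function name, the term-to-IRIs mapping, and the list-valued node shape are all made up for the example. A generator matches only if the value is present under every one of its IRIs; on a match, the duplicates are removed and the generator's term is returned for use in the compacted result.

```python
# Illustrative sketch only (not the spec algorithm): a property generator
# matches when the value appears under *all* of its IRIs; on a match the
# duplicates are deleted and the generator's term is returned.

def find_and_remove_duplicates(node, generators, value):
    for term, iris in generators.items():
        if all(value in node.get(iri, []) for iri in iris):
            for iri in iris:                  # remove the duplicates
                node[iri].remove(value)
                if not node[iri]:
                    del node[iri]
            return term                       # compact under this term
    return None                               # no generator matched

node = {
    "http://purl.org/dc/terms/title": ["Home"],
    "http://schema.org/name": ["Home"],
    "http://xmlns.com/foaf/0.1/name": ["Home", "Other"],
}
generators = {"term": ["http://purl.org/dc/terms/title",
                       "http://schema.org/name",
                       "http://xmlns.com/foaf/0.1/name"]}
print(find_and_remove_duplicates(node, generators, "Home"))
print(node)
```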
The algorithms have been updated (issue #160). This one is about the syntax spec, which isn't completely up to date yet. I will fix the missing parts by the end of this week.
After the API spec, the syntax spec has now also been updated to add support for property generators. Unless I hear objections, I will therefore close this issue in 24 hours.
RESOLVED: Remove property generators from JSON-LD before Last Call, due to no developers needing the feature, the feature's high potential for misuse, and the complexity it adds to the specification.
I've updated both the syntax and the API spec as well as the test suite to remove property generators. Unless I hear objections, I will therefore close this in 24 hours.
Many publishers need to publish in multiple vocabularies so that they can target multiple consumers. The only way that I can see to use multiple vocabularies is to repeat the value, using a different term as the attribute each time.
I would like to see a more concise way to use multiple vocabs.
For example, in Drupal 8 the current thinking is that we would have a site-specific vocabulary automatically generated for each site. However, we also need to allow the user to map to external vocabularies as needed; for example, Schema.org.
If we can only use multiple vocabularies by repeating the values multiple times, I fear it would be too verbose for our use case.
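To illustrate the verbosity concern (the context and property names here are hypothetical, not from Drupal): without a term that expands to multiple IRIs, the same value has to be repeated once per vocabulary:

```json
{
  "@context": {
    "site_title": "http://example.com/site-vocab/title",
    "schema_name": "http://schema.org/name"
  },
  "site_title": "My first post",
  "schema_name": "My first post"
}
```

With a single term expanding to both IRIs, the value would appear only once.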