RDFa Vocabularies


RDFa Vocabularies

Manu Sporny
I had an action to generate spec text for pulling in external vocabulary
documents. Here's the first cut:

This document outlines an extension to the syntax and processing
rules that would allow the list of reserved words recognized by RDFa to
be extended. The goal of this mechanism is to make authoring RDFa easier
for web developers and designers.

http://rdfa.digitalbazaar.com/specs/rdfa-vocab-20100111.html

-- manu

[1] http://www.w3.org/2010/01/07-rdfa-minutes.html#action03

--
Manu Sporny (skype: msporny, twitter: manusporny)
President/CEO - Digital Bazaar, Inc.
blog: Monarch - Next Generation REST Web Services
http://blog.digitalbazaar.com/2009/12/14/monarch/


Re: RDFa Vocabularies

Ivan Herman-2
Manu,

thanks for writing this down.

All in all, I am a little bit wary of this proposal. It does put a
non-negligible extra load on implementations... I also have
questions:

- implementations will be incompatible as to the format of the document
found at the value of @vocab. The text says:

[[[
The attribute's value should contain one or more space-separated URLs.
Each URL, when dereferenced, should provide a document marked up with
the RDFa Vocabulary
]]]

but it does not specify the serialization format. There can be many:
RDF/XML, RDFa, Turtle, XHTML+GRDDL... This may lead to deployment issues
for users. It is also a source of complication for implementations:
would they have to include an RDF/XML parser in their RDFa
implementation? (After all, besides RDFa, RDF/XML is, up to now, the only
standard serialization format for RDF; well, one can argue that
XHTML+GRDDL is another one...)

I wonder whether we should not restrict ourselves to RDFa as an accepted
format. Implementations still have to, sort of, recursively call
themselves to interpret vocabulary files, but at least no further parser
is necessary.

- there is a danger of cycles. Say

Doc A with URI <http://www.example.org/A>:

<bla vocab="http://www.example.org/B">....

Doc B with URI <http://www.example.org/B>:

<blabla vocab="http://www.example.org/A">....

then, well, we may have a problem. We need a standard strategy for
handling this case.

Ivan



On 2010-1-12 05:01 , Manu Sporny wrote:

> I had an action to generate spec text for pulling in external vocabulary
> documents. Here's the first cut:
>
> This document outlines an extension to the syntax and processing
> rules that would allow the list of reserved words recognized by RDFa to
> be extended. The goal of this mechanism is to make authoring RDFa easier
> for web developers and designers.
>
> http://rdfa.digitalbazaar.com/specs/rdfa-vocab-20100111.html
>
> -- manu
>
> [1] http://www.w3.org/2010/01/07-rdfa-minutes.html#action03
>
--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf



Re: RDFa Vocabularies

Manu Sporny
Ivan Herman wrote:
> It does put an extra load on implementations that is non-negligible.

Yes, that's very true. Remote retrieval of vocabulary documents makes it
impossible to do a Javascript implementation unless there is a new
browser feature added (for retrieving plain text @vocab documents). For
example:

var vocab = document.getVocabulary("http://example.org/vocab");

The http://example.org/vocab document would have to exist in the current
document in a @vocab attribute in order for the browser to allow
retrieving it. We may want to consider adding this requirement to the
RDFa API document.
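Purely as a sketch of that restriction (getVocabulary above is
hypothetical, and so is this helper; no browser implements either today):

  // Hypothetical check: a vocabulary URL is only retrievable if it appears
  // in a @vocab attribute somewhere in the current document.
  function isDeclaredVocabulary(url) {
    var nodes = document.querySelectorAll("[vocab]");
    for (var i = 0; i < nodes.length; i++) {
      // @vocab may contain one or more space-separated URLs
      var urls = nodes[i].getAttribute("vocab").split(/\s+/);
      if (urls.indexOf(url) !== -1) {
        return true;
      }
    }
    return false;
  }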

The alternative is native, full-browser support for RDFa, which would
take some time to implement in all browsers. We may have more luck
getting a simple API call into the browser.

> I wonder whether we should not restrict ourselves to RDFa as an accepted
> format. Implementations still have to, sort of, recursively call
> themselves to interpret vocabulary files, but at least no further parser
> is necessary.

Right, I agree that the document should be marked up in RDFa. The only
reason the text isn't more specific about XHTML+RDFa or HTML+RDFa was
because we may want to serve SVG+RDFa or ODF+RDFa from that URL via
content negotiation.

I'll try and put some language in there that makes it more clear that
the RDFa Vocabulary document should be marked up in RDFa.

> - there is a danger of cycles.

Keeping a [vocabulary stack] around and pushing/popping vocabs processed
via @vocab would be one way of solving that problem. When you process
each @vocab URL:

1. Check the stack to ensure that the URL doesn't already exist on the
   stack. If it does, a cycle has been detected and you don't process
   the vocab URL.
2. If the URL doesn't exist in the stack, you push the URL onto the
   stack and process the document. Pop the vocab URL off of the stack
   when you're done processing the document.
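For illustration, a minimal Javascript sketch of that approach (the
function names are made up, not part of the draft):

  // Hypothetical cycle detection while processing @vocab URLs.
  var vocabStack = [];

  function processVocabulary(url) {
    // 1. URL already on the stack => cycle detected, skip it.
    if (vocabStack.indexOf(url) !== -1) {
      return null;
    }
    // 2. Otherwise push, process the vocabulary document, then pop.
    vocabStack.push(url);
    var terms = fetchAndParseVocabulary(url); // may recurse into processVocabulary()
    vocabStack.pop();
    return terms;
  }

(fetchAndParseVocabulary() stands in for whatever retrieval and RDFa
parsing the implementation already does.)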

Ben Adida tweeted:
> suggest making the predicates non-RDFa specific, it's just
> tokenization, it's not RDFa-specific.

So, instead of rdfa:term, we could do something like (pick one):

parser:token
curie:token
curie:prefix
curie:reference
xhv:token

I agree, it would be better to specify something broader than just RDFa.
Perhaps Microdata and RDFa could share these vocabulary descriptions :)

*ducks*

-- manu

--
Manu Sporny (skype: msporny, twitter: manusporny)
President/CEO - Digital Bazaar, Inc.
blog: Monarch - Next Generation REST Web Services
http://blog.digitalbazaar.com/2009/12/14/monarch/


Re: RDFa Vocabularies

Ivan Herman-2


On 2010-1-12 16:13 , Manu Sporny wrote:

> Ivan Herman wrote:
>> It does put an extra load on implementations that is non-negligible.
>
> Yes, that's very true. Remote retrieval of vocabulary documents makes it
> impossible to do a Javascript implementation unless there is a new
> browser feature added (for retrieving plain text @vocab documents). For
> example:
>
> var vocab = document.getVocabulary("http://example.org/vocab");
>
> The http://example.org/vocab document would have to exist in the current
> document in a @vocab attribute in order for the browser to allow
> retrieving it. We may want to consider adding this requirement to the
> RDFa API document.
>
That is a tall order. I am not a JS expert, but isn't it correct that
this restriction is deeply rooted in the browser environment? Ie, we
would then have an implicit requirement that a full-blown JS
implementation would not work as a library developed by a third party.
That would be really bad:-(

> The alternative is native, full-browser support for RDFa, which would
> take some time to implement in all browsers. We may have more luck
> getting a simple API call into the browser.
>
>> I wonder whether we should not restrict ourselves to RDFa as an accepted
>> format. Implementations still have to, sort of, recursively call
>> themselves to interpret vocabulary files, but at least no further parser
>> is necessary.
>
> Right, I agree that the document should be marked up in RDFa. The only
> reason the text isn't more specific about XHTML+RDFa or HTML+RDFa was
> because we may want to serve SVG+RDFa or ODF+RDFa from that URL via
> content negotiation.
Ah, good point. Actually, as this is going to be in RDFa 1.1, we may
refer to the general RDFa in XML document or HTML5/RDFa as the possible
target and that could cover all variations...

>
> I'll try and put some language in there that makes it more clear that
> the RDFa Vocabulary document should be marked up in RDFa.
>
>> - there is a danger of cycles.
>
> Keeping a [vocabulary stack] around and pushing/popping vocabs processed
> via @vocab would be one way of solving that problem. When you process
> each @vocab URL:
>
> 1. Check the stack to ensure that the URL doesn't already exist on the
>    stack. If it does, a cycle has been detected and you don't process
>    the vocab URL.
> 2. If the URL doesn't exist in the stack, you push the URL onto the
>    stack and process the document. Pop the vocab URL off of the stack
>    when you're done processing the document.
>
Yeah, I realized that this is not a specification but an implementation
issue.

> Ben Adida tweeted:
>> suggest making the predicates non-RDFa specific, it's just
>> tokenization, it's not RDFa-specific.
>
> So, instead of rdfa:term, we could do something like (pick one):
>
> parser:token
> curie:token
> curie:prefix
> curie:reference
> xhv:token
>
> I agree, it would be better to specify something broader than just RDFa.
> Perhaps Microdata and RDFa could share these vocabulary descriptions :)
>
> *ducks*
>
:-)

Ivan


> -- manu
>

--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf



Re: RDFa Vocabularies

Philip Taylor-5
Ivan Herman wrote:

> On 2010-1-12 16:13 , Manu Sporny wrote:
>> Ivan Herman wrote:
>>> It does put an extra load on implementations that is non-negligible.
>> Yes, that's very true. Remote retrieval of vocabulary documents makes it
>> impossible to do a Javascript implementation unless there is a new
>> browser feature added (for retrieving plain text @vocab documents). For
>> example:
>>
>> var vocab = document.getVocabulary("http://example.org/vocab");
>>
>> The http://example.org/vocab document would have to exist in the current
>> document in a @vocab attribute in order for the browser to allow
>> retrieving it. We may want to consider adding this requirement to the
>> RDFa API document.
>>
>
> That is a tall order. I am not a JS expert, but isn't it correct that
> this restriction is deeply rooted in the browser environment?

If I'm understanding the discussion correctly, then the problem is that
browser security is based on the same-origin policy, which means scripts
running on a page generally can't access data from a different origin
(where "origin" is basically domain+port+scheme). So a script that's
used on http://whatever.example/ can't access data from
http://example.org/vocab (because that would allow the first site to
access private data on the user's intranet, or private data that other
sites associate with the user via cookies).

CORS (http://dev.w3.org/2006/waf/access-control/) allows servers to
relax that restriction, so example.org could be configured to allow
access from anyone, in which case it could be read with XMLHttpRequest
(in Firefox 3.5+ and Safari 4+; and with XDomainRequest in IE8+).

I'd expect that an API like getVocabulary that doesn't use CORS and
ignores the same-origin policy would be rejected as insecure, since it
could be used to reveal information that would otherwise be inaccessible
to scripts.

--
Philip Taylor
[hidden email]


Re: RDFa Vocabularies

Manu Sporny
Philip Taylor wrote:

>> That is a tall order. I am not a JS expert, but isn't it correct that
>> this restriction is deeply rooted in the browser environment?
>
> If I'm understanding the discussion correctly, then the problem is that
> browser security is based on the same-origin policy, which means scripts
> running on a page generally can't access data from a different origin
> (where "origin" is basically domain+port+scheme). So a script that's
> used on http://whatever.example/ can't access data from
> http://example.org/vocab (because that would allow the first site to
> access private data on the user's intranet, or private data that other
> sites associate with the user via cookies).
>
> CORS (http://dev.w3.org/2006/waf/access-control/) allows servers to
> relax that restriction, so example.org could be configured to allow
> access from anyone, in which case it could be read with XMLHttpRequest
> (in Firefox 3.5+ and Safari 4+; and with XDomainRequest in IE8+).
>
> I'd expect an API like getVocabulary that doesn't use CORS and ignores
> the same-origin policy would be rejected as insecure, since it can be
> used to reveal information that would otherwise be inaccessible to scripts.

Our CTO and I just had a side discussion about CORS, resulting in each
of us reading the updated spec. After reading through it, we both agree
with Philip - that whatever mechanism is used should probably be, or at
least be based on, CORS.

If we depend on CORS, then a simple XMLHttpRequest would work to
retrieve the remote RDFa Vocabulary document (as long as the remote
server is configured to respond with "Access-Control-Allow-Origin: *"
when attempting to retrieve the vocabulary document). Also note that
this issue only applies to RDFa Vocabularies that are not kept on the
same server as the HTML+RDFa document.

So CORS+XMLHttpRequest is a good solution to ensure that RDFa Javascript
implementations are still possible for RDFa 1.1 in all of the current,
popular web browsers. Thanks, Philip :)
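For example, a rough sketch of fetching a remote vocabulary via CORS with
plain XMLHttpRequest (assuming the vocabulary server sends the header
above; IE8 would need XDomainRequest instead):

  // Minimal sketch: retrieve a remote RDFa Vocabulary document cross-origin.
  // Only works if the server responds with a permissive
  // Access-Control-Allow-Origin header.
  function fetchVocabulary(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        if (xhr.status === 200) {
          callback(null, xhr.responseText); // hand the markup to the RDFa parser
        } else {
          callback(new Error("Could not retrieve vocabulary: " + xhr.status));
        }
      }
    };
    xhr.send(null);
  }

  fetchVocabulary("http://example.org/vocab", function (err, markup) {
    if (!err) { /* parse the RDFa in 'markup' and extract the term mappings */ }
  });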

-- manu

--
Manu Sporny (skype: msporny, twitter: manusporny)
President/CEO - Digital Bazaar, Inc.
blog: Monarch - Next Generation REST Web Services
http://blog.digitalbazaar.com/2009/12/14/monarch/


Re: RDFa Vocabularies

Toby Inkster-4
In reply to this post by Manu Sporny
On Mon, 2010-01-11 at 23:01 -0500, Manu Sporny wrote:

> http://rdfa.digitalbazaar.com/specs/rdfa-vocab-20100111.html

I'd like to outline an alternative solution. This is mostly based on
ideas I've seen on this list - Mark and Ben's ideas mainly IIRC, though
I could be misattributing them.

1. Introduce a new scoped attribute @default-prefix. Actually you'd
probably want a shorter name, but I'll stick with this now as it's
pretty clear what it does. This would set the default prefix for CURIEs
that contain no colon. (The keywords found in @rel and @rev are not
CURIEs, so it does not affect those.)

So for example:

  <address about="" default-prefix="http://xmlns.com/foaf/0.1/" rev="made">
    <a typeof="Person" rel="homepage" href="http://tobyinkster.co.uk/"
       property="name">Toby Inkster</a>
  </address>

2. Permit but do not require RDFa processors to perform a limited subset
of OWL reasoning on the document, taking into account data from the
document obtained by dereferencing the default-prefix.

Assuming that <doc> is the document graph and <dp> is the default prefix
graph, the suggested reasoning to be carried out can be summed up in
SPARQL as:

        CONSTRUCT { ?subject ?property ?object . }
        WHERE {
          GRAPH <dp>  { ?localalias owl:equivalentProperty ?property . }
          GRAPH <doc> { ?subject ?localalias ?object . }
        }

        CONSTRUCT { ?subject a ?class . }
        WHERE {
          GRAPH <dp>  { ?localalias owl:equivalentClass ?class . }
          GRAPH <doc> { ?subject a ?localalias . }
        }

This covers the use case of people wanting to use vocabs that combine
terms from multiple established vocabularies. They'd simply create their
own mix-and-match vocab using a couple of fairly basic OWL terms to note
equivalences to established ones, and then set that vocab as the default
prefix (perhaps on the root <html> element).
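As a rough illustration only (plain JavaScript over arrays of {s, p, o}
triples; the function and constant names are invented), the two CONSTRUCT
rules above amount to:

  var OWL_EQ_PROP  = "http://www.w3.org/2002/07/owl#equivalentProperty";
  var OWL_EQ_CLASS = "http://www.w3.org/2002/07/owl#equivalentClass";
  var RDF_TYPE     = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type";

  // doc: triples from the document graph; dp: triples from the default-prefix graph.
  function inferEquivalences(doc, dp) {
    var inferred = [];
    dp.forEach(function (m) {
      doc.forEach(function (t) {
        // ?localalias owl:equivalentProperty ?property => rewrite the predicate
        if (m.p === OWL_EQ_PROP && t.p === m.s) {
          inferred.push({ s: t.s, p: m.o, o: t.o });
        }
        // ?localalias owl:equivalentClass ?class => rewrite the rdf:type object
        if (m.p === OWL_EQ_CLASS && t.p === RDF_TYPE && t.o === m.s) {
          inferred.push({ s: t.s, p: RDF_TYPE, o: m.o });
        }
      });
    });
    return inferred;
  }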

By making the reasoning optional, triples (albeit perhaps less useful
ones) can still be obtained from the document in the case when
default-prefix is not dereferenceable (be that because the processor is
running inside a sandboxed environment, or it's running offline, or the
domain name used in the default prefix has lapsed) or cannot be parsed.

--
Toby A Inkster
<mailto:[hidden email]>
<http://tobyinkster.co.uk>



Re: RDFa Vocabularies

Ivan Herman-2
In reply to this post by Manu Sporny
Phew:-) That makes the whole approach much more realistic! If we rely on
an RDFa serialization for the vocabulary format, then the load on
implementers is much lower.

Yes, I might look into implementing this as a test:-)

ivan

On 2010-1-12 19:30 , Manu Sporny wrote:

> Philip Taylor wrote:
>>> That is a tall order. I am not a JS expert, but isn't it correct that
>>> this restriction is deeply rooted in the browser environment?
>>
>> If I'm understanding the discussion correctly, then the problem is that
>> browser security is based on the same-origin policy, which means scripts
>> running on a page generally can't access data from a different origin
>> (where "origin" is basically domain+port+scheme). So a script that's
>> used on http://whatever.example/ can't access data from
>> http://example.org/vocab (because that would allow the first site to
>> access private data on the user's intranet, or private data that other
>> sites associate with the user via cookies).
>>
>> CORS (http://dev.w3.org/2006/waf/access-control/) allows servers to
>> relax that restriction, so example.org could be configured to allow
>> access from anyone, in which case it could be read with XMLHttpRequest
>> (in Firefox 3.5+ and Safari 4+; and with XDomainRequest in IE8+).
>>
>> I'd expect an API like getVocabulary that doesn't use CORS and ignores
>> the same-origin policy would be rejected as insecure, since it can be
>> used to reveal information that would otherwise be inaccessible to scripts.
>
> Our CTO and I just had a side discussion about CORS, resulting with each
> of us reading the updated spec. After reading through it, we both agree
> with Philip - that whatever mechanism is used should probably be, or at
> least be based on, CORS.
>
> If we depend on CORS, then a simple XMLHttpRequest would work to
> retrieve the remote RDFa Vocabulary document (as long as the remote
> server is configured to respond with "Access-Control-Allow-Origin: *"
> when attempting to retrieve the vocabulary document). Also note that
> this issue only applies to RDFa Vocabularies that are not kept on the
> same server as the HTML+RDFa document.
>
> So CORS+XMLHttpRequest is a good solution to ensure that RDFa Javascript
> implementations are still possible for RDFa 1.1 in all of the current,
> popular web browsers. Thanks, Philip :)
>
> -- manu
>
--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf



Re: RDFa Vocabularies

KANZAKI Masahide-2
In reply to this post by Manu Sporny
Hi,

I suspect it's not a very good idea to use RDF to map shorthand
names to global terms, because it will add useless annotations to the
terms. For example, if http://example.org/vocab contains the triple

<http://xmlns.com/foaf/0.1/name> rdfa:term "name" .

then this will add an extra description to the FOAF vocabulary. Of course
anyone can say anything on the Web, but it's probably not good practice
to tweak someone else's vocabulary in the global space, IMHO.

I guess some local mapping mechanism, e.g. JSON, would be better in this
case, and might be easier to process?


2010/1/13 Manu Sporny <[hidden email]>:

>> I wonder whether we should not restrict ourselves to RDFa as an accepted
>> format. Implementations still have to, sort of, recursively call
>> themselves to interpret vocabulary files, but at least no further parser
>> is necessary.
>
> Right, I agree that the document should be marked up in RDFa. The only
> reason the text isn't more specific about XHTML+RDFa or HTML+RDFa was
> because we may want to serve SVG+RDFa or ODF+RDFa from that URL via
> content negotiation.
>
> I'll try and put some language in there that makes it more clear that
> the RDFa Vocabulary document should be marked up in RDFa.

--
@prefix : <http://www.kanzaki.com/ns/sig#> . <> :from [:name
"KANZAKI Masahide"; :nick "masaka"; :email "[hidden email]"].


Re: RDFa Vocabularies

Mark Birbeck-4
Hi Masahide,

> I wonder it's not very good idea to use RDF in order to map shorthand
> names to global terms, because it will add useless annotations to the
> terms. For example, if http://example.org/vocab contains a triple
>
> <http://xmlns.com/foaf/0.1/name> rdfa:term "name" .
>
> then this will add an extra description to FOAF vocabulary. Of course
> anyone can say anything on the Web, but probably it's not a good
> practice to tweak someone else's vocabulary on the global space, IMHO.

> I guess some local mapping mechanisms, e.g. JSON, would be better in
> this case, and might be easier to process ?

I agree, and think that whatever architecture we create, it *must* be
able to cope with a JSON format for the token mappings, even if other
formats are also provided.

I'll explain.

A related technique is the 'context' object that I use in RDFj [1]. A
full example of RDFj looks something like this:

  {
    "context": {
      "base": "<http://example.org/about>",
      "token": {
        "title": "http://xmlns.com/foaf/0.1/title",
        "maker": "http://xmlns.com/foaf/0.1/maker",
        "name": "http://xmlns.com/foaf/0.1/name",
        "homepage": "http://xmlns.com/foaf/0.1/homepage"
      }
    },
    "$": "<>",
      "title": "Anna's Homepage",
      "maker": {
        "name": "Anna Wilder",
        "homepage": "<>"
      }
  }

The idea is that although the JSON object can be used directly in
applications, e.g.:

  if (obj.name === "Anna Wilder") {
    ...
  }

it should also be possible to interpret this object as a set of RDF
triples. The context object is what makes it possible to interpret the
data in this way.

I've shown the context object as part of the data here, but my
thinking is that it should also be possible to load this context
object separately from the actual data (perhaps via a separate service
request).

So although RDFj is not the issue here, you can see that once you
allow this context object to exist independently of the data that it
is used with, it also becomes clear that this context object is no
different to the vocab that we need for RDFa.
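Just to illustrate the token-mapping side (a hypothetical helper, not
part of backplanejs), expanding a token against such a context is
essentially a dictionary lookup:

  // Hypothetical helper: map a token to a full URI using an RDFj-style context.
  function expandToken(context, token) {
    if (context.token && context.token.hasOwnProperty(token)) {
      return context.token[token];
    }
    return null; // unknown token: caller decides whether to ignore or report it
  }

  // e.g. expandToken(obj.context, "name") => "http://xmlns.com/foaf/0.1/name"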

Note that this is much the same as using @prefix in N3; it's important
to have a non-triple way to express data that is used to decode
triples.

Regards,

Mark

[1] <http://code.google.com/p/backplanejs/wiki/Rdfj>


Re: RDFa Vocabularies

Ivan Herman-2
In reply to this post by KANZAKI Masahide-2
That is actually a good point... But it is also a matter of how the RDF
vocabulary is used. Ie, you are right that having an RDF statement of
the sort

<http://xmlns.com/foaf/0.1/name> rdfa:term "name"

is not a good idea. But having something like

[
   a rdfa:Term ;
   rdfa:uri "http://xmlns.com/foaf/0.1/name" ;
   rdfa:term "name"
]

would be o.k...

I regard the Json encoding as a somewhat issue.

Ivan

On 2010-1-13 10:52 , KANZAKI Masahide wrote:

> Hi,
>
> I wonder it's not very good idea to use RDF in order to map shorthand
> names to global terms, because it will add useless annotations to the
> terms. For example, if http://example.org/vocab contains a triple
>
> <http://xmlns.com/foaf/0.1/name> rdfa:term "name" .
>
> then this will add an extra description to FOAF vocabulary. Of course
> anyone can say anything on the Web, but probably it's not a good
> practice to tweak someone else's vocabulary on the global space, IMHO.
>
> I guess some local mapping mechanisms, e.g. JSON, would be better in
> this case, and might be easier to process ?
>
>
> 2010/1/13 Manu Sporny <[hidden email]>:
>>> I wonder whether we should not restrict ourselves to RDFa as an accepted
>>> format. Implementations still have to, sort of, recursively call
>>> themselves to interpret vocabulary files, but at least no further parser
>>> is necessary.
>>
>> Right, I agree that the document should be marked up in RDFa. The only
>> reason the text isn't more specific about XHTML+RDFa or HTML+RDFa was
>> because we may want to serve SVG+RDFa or ODF+RDFa from that URL via
>> content negotiation.
>>
>> I'll try and put some language in there that makes it more clear that
>> the RDFa Vocabulary document should be marked up in RDFa.
>
--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf



Re: RDFa Vocabularies

Mark Birbeck-4
Hi Ivan,

On Wed, Jan 13, 2010 at 10:35 AM, Ivan Herman <[hidden email]> wrote:

> That is actually a good point... But it is also a matter of how the RDF
> vocabulary is used. Ie, you are right that having an RDF statement of
> the sort
>
> <http://xmlns.com/foaf/0.1/name> rdfa:term "name"
>
> is not a good idea. But having something like
>
> [
>   a rdfa:Term ;
>   rdfa:uri "http://xmlns.com/foaf/0.1/name" ;
>   rdfa:term "name"
> ]
>
> would be o.k...

One way to come at this is to ask whether you might do anything with
these triples.

For example, might we reason across tokens, so that the URI we map on
Thursday might be different to the one we use on Sunday?

I wouldn't rule this out, but I'm having a hard time thinking of a
situation where it would be better to change the token mapping than it
would to actually change the underlying data ("on Thursday I work at
the office; on Sunday I work at home").

Given this, I don't see much point in going out of our way to create
tokens in RDF. Just as N3 uses @prefix and RDF/XML uses @xmlns,
there's nothing necessarily wrong with describing data that is used
for encoding triples with a language that is not itself used for
encoding triples.

Unless you want to reason over the prefix mappings I would suggest
that all we need are name-value pairs.


> I regard the Json encoding as a somewhat issue.

I would argue that the JSON encoding of RDF is crucial to the next
phase of the semantic web (just as allowing RDF to be transported via
HTML was crucial for the current phase), but that's a discussion for
another day.

The question here is narrower, and concerns whether we can enable a
broad deployment of token-mapping, across diverse systems; I would
suggest that we can only do that if JSON is part of the mix.

Regards,

Mark

--
Mark Birbeck, webBackplane

[hidden email]

http://webBackplane.com/mark-birbeck

webBackplane is a trading name of Backplane Ltd. (company number
05972288, registered office: 2nd Floor, 69/85 Tabernacle Street,
London, EC2A 4RR)


Re: RDFa Vocabularies

Ivan Herman-2


On 2010-1-13 11:54 , Mark Birbeck wrote:

> Hi Ivan,
>
> On Wed, Jan 13, 2010 at 10:35 AM, Ivan Herman <[hidden email]> wrote:
>> That is actually a good point... But it is also a matter of how the RDF
>> vocabulary is used. Ie, you are right that having an RDF statement of
>> the sort
>>
>> <http://xmlns.com/foaf/0.1/name> rdfa:term "name"
>>
>> is not a good idea. But having something like
>>
>> [
>>   a rdfa:Term ;
>>   rdfa:uri "http://xmlns.com/foaf/0.1/name" ;
>>   rdfa:term "name"
>> ]
>>
>> would be o.k...
>
> One way to come at this is to ask whether you might do anything with
> these triples.
>
> For example, might we reason across tokens, so that the URI we map on
> Thursday might be different to the one we use on Sunday?
>
> I wouldn't rule this out, but I'm having a hard time thinking of a
> situation where it would be better to change the token mapping than it
> would to actually change the underlying data ("on Thursday I work at
> the office; on Sunday I work at home").
>
> Given this, I don't see much point in going out of our way to create
> tokens in RDF. Just as N3 uses @prefix and RDF/XML uses @xmlns,
> there's nothing necessarily wrong with describing data that is used
> for encoding triples with a language that is not itself used for
> encoding triples.
>
> Unless you want to reason over the prefix mappings I would suggest
> that all we need are name-value pairs.
>
This is all true. Ie, from the point of view of the goal of the whole
exercise, using RDF or not is not really relevant. I guess (well, I
cannot speak in the name of Manu) that the reason may simply be that this
is a data structure that we already have, so why invent another one
(simple as that may be)? But I am not particularly attached to RDF for this.

>
>> I regard the Json encoding as a somewhat issue.
>

Sorry, the line should have said "I regard the Json encoding as a
somewhat different issue". Crucial word missing:-(

> I would argue that the JSON encoding of RDF is crucial to the next
> phase of the semantic web (just as allowing RDF to be transported via
> HTML was crucial for the current phase), but that's a discussion for
> another day.

Absolutely: this is a discussion for another day:-) That is what I
wanted to say...

>
> The question here is narrower, and concerns whether we can enable a
> broad deployment of token-mapping, across diverse systems; I would
> suggest that we can only do that if JSON is part of the mix.
>

I am not a huge fan of JSON, probably because I use Python, where of
course there are JSON converters, but it remains a syntax different from
Python structures altogether. But, at the end of the day, I do not care
too much. Using RDFa for that purpose has some advantages, though, ie,
that one can provide a human-readable format for the terms being used.
That may be very beneficial for, say, FOAF or DC...

Ivan


> Regards,
>
> Mark
>
> --
> Mark Birbeck, webBackplane
>
> [hidden email]
>
> http://webBackplane.com/mark-birbeck
>
> webBackplane is a trading name of Backplane Ltd. (company number
> 05972288, registered office: 2nd Floor, 69/85 Tabernacle Street,
> London, EC2A 4RR)
--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf



RE: RDFa Vocabularies

Brian Peterson-2
In reply to this post by Manu Sporny
The document indicates that comments from the public are welcome, so if you don't mind...

I'm part of a group that is promoting a Linked Data architecture at our organization, and we're advocating XHTML+RDFa as the primary response format for web services. We find there are many benefits for using XHTML+RDFa for data requests as well as web pages.

The CURIE syntax appeared at first to be a small step away from URIs, but CURIEs have proven useful for reducing the size of the docs, so I've come to accept them. Plus, the URIs can be recovered easily enough.

Using this @vocab framework is a giant leap away from URIs. At least CURIEs could be processed with just local information; now there could be many additional dereferences required just to parse the document for data. This has the potential to kill attempts to use XHTML+RDFa for Linked Data because of the possible inefficiencies.

Doesn't this introduce the possibility of name clashes? If you download the @vocab docs and some have different mappings for a term, which one do you take? All?

I think this proposed addition should be declined, even if the specification covers this issue of terms with multiple mappings. RDF uses URI references specifically to avoid this issue (among other reasons, of course). I would prefer to see other solutions to help authors rather than diluting or weakening the RDF basis for RDFa.

It appears to me that this change would come at a high cost, particularly when there are other ways of helping authors with semantic markup. Perhaps editors could be made to assist with semantic markup? Maybe allow for a vocab mapping shortcut but use a post-processing script that replaces the terms with URIs or CURIEs. That way the cost is paid up front, once, and consumers don't have to worry about it. This would allow for authoring convenience without adding complexity, inefficiency, and ambiguity to parsing.
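For illustration only, such a post-processing step could be as simple as the following sketch (the term mapping and the attribute list are hypothetical):

  // Hypothetical post-processor: rewrite shorthand tokens in @property/@rel/@rev
  // to full URIs before the document is published, so consumers never see
  // the shorthand form.
  var mapping = {
    "name":     "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage"
  };

  ["property", "rel", "rev"].forEach(function (attr) {
    var nodes = document.querySelectorAll("[" + attr + "]");
    for (var i = 0; i < nodes.length; i++) {
      var value = nodes[i].getAttribute(attr);
      if (mapping.hasOwnProperty(value)) {
        nodes[i].setAttribute(attr, mapping[value]);
      }
    }
  });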

Brian Peterson


-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Manu Sporny
Sent: Monday, January 11, 2010 11:02 PM
To: RDFa mailing list
Subject: RDFa Vocabularies

I had an action to generate spec text for pulling in external vocabulary
documents. Here's the first cut:

This document outlines an extension to the syntax and processing
rules that would allow the list of reserved words recognized by RDFa to
be extended. The goal of this mechanism is to make authoring RDFa easier
for web developers and designers.

http://rdfa.digitalbazaar.com/specs/rdfa-vocab-20100111.html

-- manu

[1] http://www.w3.org/2010/01/07-rdfa-minutes.html#action03

--
Manu Sporny (skype: msporny, twitter: manusporny)
President/CEO - Digital Bazaar, Inc.
blog: Monarch - Next Generation REST Web Services
http://blog.digitalbazaar.com/2009/12/14/monarch/





Re: RDFa Vocabularies

Ivan Herman-2
Brian,

I think you have legitimate issues that have to be handled (eg, what
happens if there is a name clash). However... I do not think anybody at
any time proposed @vocab as a replacement for the current mechanism.
In other words, applications can stay with URIs and CURIEs and ignore
this mechanism if they want. Is this the way you understood it?

Sincerely

ivan

On 2010-1-17 06:35 , Brian Peterson wrote:

> The document indicates that comments from the public are welcome, so if you don't mind...
>
> I'm part of a group that is promoting a Linked Data architecture at our organization, and we're advocating XHTML+RDFa as the primary response format for web services. We find there are many benefits for using XHTML+RDFa for data requests as well as web pages.
>
> The CURIE syntax appeared at first to be a small step away from URIs, but they've proven useful for reducing the size of the docs, so I've come to accept them. Plus, the URIs can be recovered easily enough.
>
> Using this @vocab framework is a giant leap away from URIs. At least CURIEs could be processed with just local information; now there could be many additional dereferences required just to parse the document for data. This has the potential to kill attempts to use XHTML+RDFa for Linked Data because of the possible inefficiencies.
>
> Doesn't this introduce the possibility of name clashes? If you download the @vocab docs and some have different mappings for a term, which one do you take? All?
>
> I think this proposed addition should be declined, even if the specification covers this issue of terms with multiple mappings. RDF uses URI references specifically to avoid this issue (among other reasons, of course). I would prefer to see other solutions to help authors rather than diluting or weakening the RDF basis for RDFa.
>
> It appears to me that this change would come at a high cost, particularly when there are other ways of helping authors with semantic markup. Perhaps editors could be made to assist with semantic markup? Maybe allow for a vocab mapping shortcut but use a post-processing script that replaces the terms with URIs or CURIEs. That way the cost is paid up front, once, and consumers don't have to worry about it. This would allow for authoring convenience without requiring the additional complexity, inefficiency, and ambiguity to parsing.
>
> Brian Peterson
>
>
> -----Original Message-----
> From: [hidden email] [mailto:[hidden email]] On Behalf Of Manu Sporny
> Sent: Monday, January 11, 2010 11:02 PM
> To: RDFa mailing list
> Subject: RDFa Vocabularies
>
> I had an action to generate spec text for pulling in external vocabulary
> documents. Here's the first cut:
>
> This document outlines an extension to the syntax and processing
> rules that would allow the list of reserved words recognized by RDFa to
> be extended. The goal of this mechanism is to make authoring RDFa easier
> for web developers and designers.
>
> http://rdfa.digitalbazaar.com/specs/rdfa-vocab-20100111.html
>
> -- manu
>
> [1] http://www.w3.org/2010/01/07-rdfa-minutes.html#action03
>
--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf



RE: RDFa Vocabularies

Brian Peterson-2
Hi Ivan,

I did understand that @vocab would be in addition to the current mechanisms. And if it becomes part of the standard, then we would have to discourage people in our organization from using it. But if it is part of the standard, there is the possibility that a commercial product might utilize the mechanism, so we might have to deal with it anyway. This would make XHTML+RDFa less attractive as a standard for what we are trying to accomplish.

It would still complicate the standard even if it is optional. If the intent is to make it easier for authors to use semantic markup, it seems there are other approaches that would accomplish the same goal without affecting the standard. A post-processing script that replaces the tokens allows authors to take advantage of the shortcut mappings without impacting performance or introducing this complication. This approach has the potential of making it even easier for authors, since it could automate decisions about using @rel vs @property, @content vs @resource, safe CURIE vs CURIE, or empty @content vs no @content vs XMLLiteral @content.

There are many other possible additions to RDFa that could make it more efficient for semantic markup and encoding RDF (e.g. markup on table columns vs each row, lists as containers, SPARQL named graphs, and reification in general). Perhaps it would be best to focus on making these enhancements first and then evaluate whether the standard could handle further complications without becoming unwieldy.

Brian

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Ivan Herman
Sent: Sunday, January 17, 2010 4:10 AM
To: Brian Peterson
Cc: 'RDFa mailing list'
Subject: Re: RDFa Vocabularies

Brian,

I think you have legitimate issues that have to be handled (eg, what
happens if there is a name clash). However... I do not think anybody at
any time proposed @vocab as a replacement for the current mechanism.
In other words, applications can stay with URIs and CURIEs and ignore
this mechanism if they want. Is this the way you understood it?

Sincerely

ivan

On 2010-1-17 06:35 , Brian Peterson wrote:

> The document indicates that comments from the public are welcome, so if you don't mind...
>
> I'm part of a group that is promoting a Linked Data architecture at our organization, and we're advocating XHTML+RDFa as the primary response format for web services. We find there are many benefits for using XHTML+RDFa for data requests as well as web pages.
>
> The CURIE syntax appeared at first to be a small step away from URIs, but they've proven useful for reducing the size of the docs, so I've come to accept them. Plus, the URIs can be recovered easily enough.
>
> Using this @vocab framework is a giant leap away from URIs. At least CURIEs could be processed with just local information; now there could be many additional dereferences required just to parse the document for data. This has the potential to kill attempts to use XHTML+RDFa for Linked Data because of the possible inefficiencies.
>
> Doesn't this introduce the possibility of name clashes? If you download the @vocab docs and some have different mappings for a term, which one do you take? All?
>
> I think this proposed addition should be declined, even if the specification covers this issue of terms with multiple mappings. RDF uses URI references specifically to avoid this issue (among other reasons, of course). I would prefer to see other solutions to help authors rather than diluting or weakening the RDF basis for RDFa.
>
> It appears to me that this change would come at a high cost, particularly when there are other ways of helping authors with semantic markup. Perhaps editors could be made to assist with semantic markup? Maybe allow for a vocab mapping shortcut but use a post-processing script that replaces the terms with URIs or CURIEs. That way the cost is paid up front, once, and consumers don't have to worry about it. This would allow for authoring convenience without requiring the additional complexity, inefficiency, and ambiguity to parsing.
>
> Brian Peterson
>
>
> -----Original Message-----
> From: [hidden email] [mailto:[hidden email]] On Behalf Of Manu Sporny
> Sent: Monday, January 11, 2010 11:02 PM
> To: RDFa mailing list
> Subject: RDFa Vocabularies
>
> I had an action to generate spec text for pulling in external vocabulary
> documents. Here's the first cut:
>
> This document outlines an extension to the syntax and processing
> rules that would allow the list of reserved words recognized by RDFa to
> be extended. The goal of this mechanism is to make authoring RDFa easier
> for web developers and designers.
>
> http://rdfa.digitalbazaar.com/specs/rdfa-vocab-20100111.html
>
> -- manu
>
> [1] http://www.w3.org/2010/01/07-rdfa-minutes.html#action03
>

--

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF   : http://www.ivan-herman.net/foaf.rdf
vCard  : http://www.ivan-herman.net/HermanIvan.vcf