[NLP2RDF] NIF, Stanford Dependency Parses

Chris Roeder chris.roeder at ucdenver.edu
Wed Mar 13 16:53:01 CET 2013


On 3/10/13 8:03 AM, Sebastian Hellmann wrote:
> Hi Chris,
> the project is in a big transition currently. There is a stable version 1.0 of
> the Stanford parser, and it is available here:
> https://code.google.com/p/nlp2rdf/downloads/list .
>
> We are currently working on NIF 2.0, which will have a lot of (mostly
> administrative) changes. On Friday, I received a green light to host the
> ontologies on http://persistence.uni-leipzig.de/nlp2rdf, which will be there as
> long as the University of Leipzig exists. We are also moving to GitHub.
>
> I have started to update the Stanford parser, here:
> https://bitbucket.org/kurzum/nif-core/src
> Note that this is not finished yet and probably will not compile; it is
> still work in progress.
>
> Ideally, the wrapper for Stanford should not be in the NIF repo, but integrated
> into the Stanford Core NLP framework. That would be the most effective approach,
> but I didn't have the time to look at the Stanford code in detail (especially
> where they keep their (de-)serialization).
Lots of activity! Great!
>
> I recently generated java classes for Olia. You can access them by renaming:
> http://olia.nlp2rdf.org/owl/penn.owl ->
> http://olia.nlp2rdf.org/owl/Penn.java
I see the source there, but I'm curious what it does and how it works.
I had a brief look at 1.0, but didn't notice anything in the POMs
that would generate code like that. I'll have a look at 2.0 on Bitbucket.
I've used GitHub and like it.
>
> This is also not perfect yet and can be done in many other ways. In the end, the
> RDF that comes out is what counts.
> So I would be happy if you were to tackle any of these problems. Maybe you
> would like to create the reference implementation for Stanford Core, or even
> become the NIF-Stanford maintainer. I am not sure, however, whether that matches
> your goals. If you just want some quick results, you can use the old code,
> especially if you are only interested in English. Having it work out of the
> box for different tagsets and languages would be swell, however.
In the short term, I want to experiment with dependency parses and event detection.
In the longer term, I'm interested in maintaining such a project, and I'm encouraged
by the activity. I'd also like to learn more about the ontologies used and how
they relate to the Open Annotation work.
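For the dependency-parse experiment, the idea can be sketched as serializing each word occurrence and each governor-dependent edge as offset-addressed RDF. The following is a minimal hand-rolled sketch, not the actual output of the NIF 1.0 wrapper: the `#char=begin,end` fragment style follows NIF's RFC 5147 approach, but the document URI and the `nif:dependency` / `nif:dependencyRelationType` property names are assumptions for illustration.

```python
# Minimal sketch: one Stanford dependency edge as NIF-style Turtle.
# The offset URIs follow NIF's RFC 5147 style ("#char=begin,end");
# the document URI and the dependency property names are assumptions,
# not guaranteed NIF 2.0 vocabulary.

NIF = "http://persistence.uni-leipzig.de/nlp2rdf/ontologies/nif-core#"
DOC = "http://example.org/doc"  # hypothetical document URI


def char_uri(begin, end):
    """Offset-based URI for a substring of the document."""
    return f"<{DOC}#char={begin},{end}>"


def word_triples(begin, end, anchor):
    """Describe one word occurrence as a nif:String."""
    return (f"{char_uri(begin, end)} a nif:String ;\n"
            f'    nif:beginIndex "{begin}" ;\n'
            f'    nif:endIndex "{end}" ;\n'
            f'    nif:anchorOf "{anchor}" .')


def dependency_triples(gov, dep, relation):
    """Link a governor span to a dependent span; property names are illustrative."""
    return (f"{char_uri(*gov)} nif:dependency {char_uri(*dep)} ;\n"
            f'    nif:dependencyRelationType "{relation}" .')


turtle = "\n".join([
    f"@prefix nif: <{NIF}> .",
    word_triples(0, 4, "Dogs"),
    word_triples(5, 10, "chase"),
    dependency_triples((5, 10), (0, 4), "nsubj"),
])
print(turtle)
```

In practice the offsets and relation labels would come from Stanford's SemanticGraph for each sentence; the 1.0 download above would be the authoritative source for the exact vocabulary the wrapper emits.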
>
> The first step would be to describe your use case and requirements in more
> detail. We are collecting these in the wiki:
> http://wiki.nlp2rdf.org/wiki/Use_cases#Use_cases
> http://wiki.nlp2rdf.org/wiki/Requirements#Requirements
> In the end, we will check whether NIF 2.0 will be able to fulfill the use cases.

Will do. I applied for a wiki user.
>
> All the best,
> Sebastian

thanks
-Chris
>
>
> On 10.03.2013 at 00:55, Roeder, Chris wrote:
> > Hi,
> >
> > I'm just having a look around. I'm interested in getting
> > Stanford dependency parses into RDF. It looks like there
> > is a lot of worthwhile infrastructure here. Is anyone
> > working on this?  Would you like some help?
> >
> > -Chris Roeder
> > _______________________________________________
> > NLP2RDF mailing list
> > NLP2RDF at lists.informatik.uni-leipzig.de
> > http://lists.informatik.uni-leipzig.de/mailman/listinfo/nlp2rdf
> >
>
>
> -- 
> Dipl. Inf. Sebastian Hellmann
> Department of Computer Science, University of Leipzig
> Projects: http://nlp2rdf.org , http://dbpedia.org
> Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
> Research Group: http://aksw.org


-- 
Christophe Roeder | Software Engineer
University of Colorado Anschutz Medical Campus | Computational Bioscience Program
303-724-7562 | chris.roeder at ucdenver.edu | http://compbio.ucdenver.edu
