Probabilistic grammatical category disambiguation is an easy problem as NLP problems go, being nicely grounded in well-understood mathematics. Today, the main applications are:                                       

                                      
A. In Information Retrieval

A1 Tags enable Named Entity Recognition (Bikel, Leek, McCallum, Freitag). Tags also underlie finding phrases by means of chunking or light parsing, without attempting a complete sentential parse. After chunking, one may assign a semantic role to these phrases based on the verb, possibly via links with an ontology (again, see Cutting et al. and Weischedel et al.).
A2 Question answering from unstructured text: Who shot John Kennedy? (Lee Harvey Oswald)
A3 Information Extraction: filling templates with phrases from full text, e.g. filling an obituary template from newspaper text (Loritz, Chen, Humphrey at LexisNexis, Inc.).
A4 Tagging is a prerequisite for Word Sense Disambiguation.
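The chunking idea in A1 can be sketched in a few lines: once a sentence is tagged, noun phrases can be found by matching a tag pattern, with no full parse required. This is a minimal illustration, not one of the cited systems; the pattern DT? JJ* NN+, the function name, and the example sentence are all assumptions for the sketch.

```python
def np_chunks(tagged):
    """Greedy left-to-right matcher for the tag pattern DT? JJ* NN+.

    `tagged` is a list of (word, tag) pairs using Penn Treebank tags.
    Returns a list of noun-phrase chunks (each a list of words).
    """
    chunks, i, n = [], 0, len(tagged)
    while i < n:
        j = i
        if j < n and tagged[j][1] == "DT":   # optional determiner
            j += 1
        while j < n and tagged[j][1] == "JJ":  # any number of adjectives
            j += 1
        k = j
        while k < n and tagged[k][1] == "NN":  # one or more nouns
            k += 1
        if k > j:  # at least one noun: we found a chunk
            chunks.append([w for w, _ in tagged[i:k]])
            i = k
        else:
            i += 1
    return chunks

sent = [("the", "DT"), ("quick", "JJ"), ("brown", "JJ"), ("fox", "NN"),
        ("jumps", "VBZ"), ("over", "IN"), ("the", "DT"), ("dog", "NN")]
print(np_chunks(sent))  # [['the', 'quick', 'brown', 'fox'], ['the', 'dog']]
```

A real light parser would cover more tag variants (NNS, NNP, possessives) and also chunk PPs, but the principle of pattern matching over tags rather than parsing is the same.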
                                         
                                       
B. In text-to-speech and speech recognition

B1 "Lead" is categorially ambiguous: it can be a noun (NN) or a verb (VB). When a text is tagged, a text-to-speech architecture will "know" which pronunciation to pick.
B2 Likewise, the pronunciation [lEd] is ambiguous (the noun "lead", the metal, vs. the past tense "led"); for speech recognition it can be disambiguated from context with HMM-based tagging.
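The HMM disambiguation mentioned in B1 and B2 comes down to Viterbi decoding: the tag of "lead" is chosen by the context tags, not the word alone. Below is a toy sketch with invented transition and emission probabilities (a real tagger estimates them from a tagged corpus); the tag inventory and the two example sentences are assumptions for illustration.

```python
trans = {  # P(tag_i | tag_{i-1}); "<s>" is the start state (toy numbers)
    ("<s>", "DT"): 0.6, ("<s>", "PRP"): 0.4,
    ("DT", "NN"): 0.9, ("DT", "VB"): 0.1,
    ("PRP", "VB"): 0.8, ("PRP", "NN"): 0.2,
}
emit = {  # P(word | tag) (toy numbers)
    ("DT", "the"): 0.5, ("PRP", "they"): 0.3,
    ("NN", "lead"): 0.004, ("VB", "lead"): 0.006,
}

def viterbi(words, tags):
    """Return the most probable tag sequence for `words` under the toy HMM."""
    best = {"<s>": (1.0, [])}  # tag -> (best probability, best path)
    for w in words:
        nxt = {}
        for t in tags:
            cands = [(p * trans.get((prev, t), 0.0) * emit.get((t, w), 0.0),
                      path + [t]) for prev, (p, path) in best.items()]
            nxt[t] = max(cands)
        best = nxt
    return max(best.values())[1]

tagset = ["DT", "PRP", "NN", "VB"]
print(viterbi(["the", "lead"], tagset))   # ['DT', 'NN'] -> noun, say [li:d]
print(viterbi(["they", "lead"], tagset))  # ['PRP', 'VB'] -> verb, say [li:d] too,
                                          # but "the lead" (metal) would be [lEd]
```

Note that "lead" itself slightly prefers the verb emission here, yet after "the" the transition probabilities override it and the noun reading wins: exactly the contextual disambiguation the text describes.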

To conclude this brief survey of what became of full-scale free-text parsing efforts and what is going on today: it should be obvious that NLP is useful for processing the increasing amount of online text, on the Internet as well as on intranets and in private electronic libraries. Consider the current scenario for research on the genome. The number of published papers in this field is increasing so rapidly that biologists find it difficult to keep up with the relevant literature, a problem addressed by, for instance, Ingenuity. Similar research activities are going on at the Lawrence Hunter Lab of the University of Colorado.

Finding argument structure, that is, finding NPs and PPs (by chunking or partial parsing) and their relation to the verbs in sentences of technical papers in the life sciences, along with semantic/case role assignment to these arguments, possibly via links to intermediate ontologies, is a worthwhile effort. It would be state-of-the-art language 'understanding' with useful applications. The semantic role assignment might be performed with a worldwide standardized ontology driven by the W3C Semantic Web efforts. Such worldwide ontologies for the life sciences already exist. After bioinformatics, other fields will surely follow suit; one may think of international law, for instance, or any of the sciences where document repositories accumulate. The standardization effort alone may point the way to a truly international ontology, a gargantuan WordNet, playing a central role in machine translation and document exploration.
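The pipeline just described, chunks plus verb-driven role assignment, can be sketched end-to-end in miniature. Here a hand-made verb frame lexicon stands in for the intermediate ontology; the frames, verbs, and example sentence are invented for illustration and are not drawn from any real life-sciences ontology.

```python
frames = {  # invented verb frames, a stand-in for real ontology links
    "inhibits": ("Agent", "Theme"),
    "activates": ("Agent", "Theme"),
}

def assign_roles(chunks, verb):
    """Map the NP chunks before and after the verb to the verb's frame roles.

    `chunks` is [pre-verbal NP, post-verbal NP] as lists of words; returns a
    dict of role -> chunk, or {} when the verb has no frame or NPs are missing.
    """
    roles = frames.get(verb)
    if roles is None or len(chunks) < 2:
        return {}
    return {roles[0]: chunks[0], roles[1]: chunks[1]}

# "The protein p53 inhibits cell growth", after chunking:
print(assign_roles([["the", "protein", "p53"], ["cell", "growth"]], "inhibits"))
# {'Agent': ['the', 'protein', 'p53'], 'Theme': ['cell', 'growth']}
```

A real system would of course handle passives, PP arguments, and many more frames, and would look the frames up in a shared ontology rather than a local dict, which is precisely where the standardization effort discussed above comes in.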

This page has barely scratched the surface of what else Natural Language Processing can do besides POS tagging, Named Entity Recognition, partial parsing, and automated assignment of semantic roles to syntactic constituents. Link analysis to facilitate analyzing text, automated summarization, classification, and clustering all hold much promise for what this exciting interdisciplinary field can accomplish in working with electronic text collections.
