There seems to be a myth that relational databases are unable to store ontologies http://en.wikipedia.org/wiki/Ontology. In fact, CASE tools such as Oracle's CASE*Dictionary and CASE*Designer have been storing metadata in meta models (put simply, a model of a model) describing how businesses work since 1988. Using Barker's notation http://en.wikipedia.org/wiki/Barker's_Notation, the core entity relationship model for Oracle's CASE tools is simply:
Figure H-1 "Simple Meta Model" 1
Oracle's CASE tools externally express views based on a tabular representation of this. The next, more generic, stage of meta model is the entity relationship diagram for 'thing':
Figure 8.35 "The Ultimate in Generic Models" 1
This is, unsurprisingly, very similar to the entity relationship diagram that underlies, say, WordNet 3.0, and I contend that any ontology can be expressed via this notation. I further contend that if it cannot, something is fundamentally wrong with how the ontology is being stored: 1) because the brain itself is merely a collection of neurons and interconnections; and 2) because there has probably been a failure to correctly normalise, and make generic, the data model for the ontology.
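As a minimal sketch of the idea (my own illustration, not Oracle's actual CASE schema), the fully generic 'thing' model needs only two relational tables: every node is a row in THING, every edge a row in RELATIONSHIP, and the relationship types themselves are just more things:

```python
import sqlite3

# Generic 'thing' meta model: two tables suffice to store any ontology graph.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE thing (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE relationship (
    from_thing INTEGER NOT NULL REFERENCES thing(id),
    to_thing   INTEGER NOT NULL REFERENCES thing(id),
    rel_type   INTEGER NOT NULL REFERENCES thing(id)  -- relationship types are things too
);
""")

def add_thing(name):
    cur = conn.execute("INSERT INTO thing (name) VALUES (?)", (name,))
    return cur.lastrowid

# Store a tiny ontology fragment: chair IS-A furniture.
is_a      = add_thing("is a kind of")
chair     = add_thing("chair")
furniture = add_thing("furniture")
conn.execute("INSERT INTO relationship VALUES (?, ?, ?)", (chair, furniture, is_a))

# Query: what is a chair a kind of?
row = conn.execute("""
    SELECT t.name
    FROM relationship r
    JOIN thing t ON t.id = r.to_thing
    WHERE r.from_thing = ? AND r.rel_type = ?
""", (chair, is_a)).fetchone()
print(row[0])  # furniture
```

Anything more elaborate (synset membership, glosses, evocation weights) is just further rows and relationship types in the same two tables.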
1. Richard Barker (1990). CASE Method: Entity Relationship Modelling. Reading, MA: Addison-Wesley Professional. ISBN 0-201-41696-4.
Ideas in Progress
Why is it so difficult to get a realistic, Turing-like response back from AIML? I believe the reason is that AIML does not deal with concepts but instead deals only with literal responses, extended by what is possible with text string pattern matching and substitution.
Clearly, literal responses can go at least part of the way to satisfying a Turing test, and they appear good in very limited environments, but I doubt there will be much progress without some kind of fundamental change. Undoubtedly AIML is the way it is because it is easy to pattern match text strings, but it always feels like something is missing when talking to an AIML bot: a lack of something that is probably felt universally, and which has probably resulted in AIML's lack of traction in the real world.
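To make the limitation concrete, here is a toy illustration (not the AIML specification, just the mechanism it boils down to) of literal pattern matching: categories map wildcard patterns to template responses, and the wildcard capture is substituted straight back in. There is no concept layer; the bot only ever rearranges the input string.

```python
import re

# Hypothetical AIML-style categories: pattern -> response template.
categories = {
    "MY NAME IS *": "Nice to meet you, {0}.",
    "I LIKE *":     "Why do you like {0}?",
}

def respond(user_input):
    """Return the first template whose pattern matches; no understanding involved."""
    text = user_input.upper().strip(".!?")
    for pattern, template in categories.items():
        # Turn the "*" wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*(g.capitalize() for g in m.groups()))
    return "Tell me more."

print(respond("My name is Alan"))  # Nice to meet you, Alan.
print(respond("I like cheese"))    # Why do you like Cheese?
```

The second response shows the problem: the bot echoes "cheese" without any notion of what cheese is, so it can never follow up with anything conceptually related.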
I suppose someone is going to have to sit down and work out a conceptual model for language, and to me it looks like the people behind WordNet http://en.wikipedia.org/wiki/WordNet are slowly getting there. I thought Cyc http://en.wikipedia.org/wiki/Cyc may have been going somewhere in the past, but now I'm not so sure.
Some of my notes: build up conceptual meta ideas by cross-correlating different languages' WordNets via foreign language dictionaries, thus finding, via cross linkages, all words in all languages associated with a concept. The big problem with this will be homonyms, but they may be obvious because their cross linkages would only occur in one language pair; thus you would need at least three languages' WordNets.
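The cross-correlation idea above can be sketched as a graph problem. The word pairs below are hypothetical stand-ins for foreign language dictionary entries: words that translate into each other across all three languages close a triangle and mark a shared concept, while a homonym sense leaves a dangling edge that only exists in one language pair.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical dictionary translation pairs: (language, word) <-> (language, word).
pairs = [
    (("en", "bank"), ("fr", "banque")),
    (("fr", "banque"), ("de", "Bank")),
    (("en", "bank"), ("de", "Bank")),   # financial sense closes across all 3 languages
    (("en", "bank"), ("fr", "rive")),   # river-bank homonym: linkage in one pair only
]

edges = defaultdict(set)
for a, b in pairs:
    edges[a].add(b)
    edges[b].add(a)

def concept_candidates():
    """Triangles spanning all three languages are candidates for one shared concept."""
    nodes = list(edges)
    found = []
    for a, b, c in combinations(nodes, 3):
        if b in edges[a] and c in edges[a] and c in edges[b]:
            if len({lang for lang, _ in (a, b, c)}) == 3:
                found.append({a, b, c})
    return found

candidates = concept_candidates()
print(candidates)
# The en/fr/de "financial bank" triangle closes; ("fr", "rive")
# dangles, exposing "bank" as a homonym exactly as the notes predict.
```

With real WordNets the node set would be synsets rather than bare words, but the triangulation logic stays the same.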
Word, idiom, synonym set, sense, explanation, meta language, concept, meta concept, mega concept, class, subclass, superclass, abstract class, evocation “how strongly does concept1 evoke concept2?”, relationship types.
Exemplification (chair-furniture) http://en.wikipedia.org/wiki/Exemplification. Application (wolf-ferocious): similarity; people do it effortlessly. Polysemous http://en.wikipedia.org/wiki/Polysemy. One solution: sense clustering, underspecification; but clustering often involves mutually exclusive criteria (semantics, syntax, frames, domains). Morpheme http://en.wikipedia.org/wiki/Morpheme. Glosses http://en.wikipedia.org/wiki/Gloss.