I propose a method for determining the “Linguistic completeness of information” in a given piece of text.
Meanings of words either depend on other entities and concepts or they do not. For example, suppose I use the word ‘gave’. ‘Gave’ means the transfer of something from one place to another. This attaches the word to 3 other entities – ‘something’, ‘one place’ and ‘another place’. So ‘gave’ is 3x. What does that mean? It means that whenever the word ‘gave’ is present in a piece of text, these 3 other entities have to be somewhere around it – ‘what was given?’, ‘the initial place’ and ‘the final place’. Take another word, say ‘to’. What is always attached to a ‘to’ is the destination: ‘to the station’, ‘to the house’, ‘gave a ball to John’, etc. So ‘to’ is 1x. A word like ‘ball’ is self-sufficient – its definition doesn’t “depend” upon the presence of other entities. So it is 0x.
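The 3x/1x/0x idea can be sketched as a small lexicon mapping each word to the slots its definition depends on. This is a minimal illustration under the assumption that such a lexicon can be built (the entries and slot names below are made up for the example, not taken from any real dictionary):

```python
# Hypothetical valency lexicon: each word maps to the entities ("slots")
# that its definition depends on.  These entries are illustrative only.
VALENCY = {
    "gave": ["thing-given", "initial-place", "final-place"],  # 3x
    "to":   ["destination"],                                  # 1x
    "ball": [],                                               # 0x
}

def valency_of(word):
    """How many other entities a word's definition depends on (its Nx)."""
    return len(VALENCY.get(word.lower(), []))

print(valency_of("gave"))  # 3
print(valency_of("to"))    # 1
print(valency_of("ball"))  # 0
```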
Method : 1) Look up the definition of each word in the text. 2) Identify the variables that definition depends upon. 3) Question whether each of those variables exists somewhere around the word in the text. Once all of these questions have answers, the text can be said to be sufficiently Linguistically complete.
(All these steps are programmable).
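As a rough sketch of how the steps above might be programmed, here is one possible shape, assuming a valency lexicon (made up for the example) and leaving the genuinely hard part – deciding whether a slot is answered nearby in the text – as a caller-supplied predicate:

```python
# Illustrative valency lexicon (same hypothetical entries as before).
VALENCY = {
    "gave": ["thing-given", "initial-place", "final-place"],
    "to":   ["destination"],
    "ball": [],
}

def open_questions(tokens, slot_is_answered):
    """Step through the text: for each word, ask whether every variable
    its definition depends on is answered somewhere around it.
    `slot_is_answered(word, slot, tokens, i)` stands in for the hard
    part of the method and is supplied by the caller."""
    unanswered = []
    for i, word in enumerate(tokens):
        for slot in VALENCY.get(word.lower(), []):
            if not slot_is_answered(word, slot, tokens, i):
                unanswered.append((word, slot))
    return unanswered

def is_linguistically_complete(tokens, slot_is_answered):
    """The text is sufficiently Linguistically complete when no
    question remains unanswered."""
    return not open_questions(tokens, slot_is_answered)

# Toy demonstration with a deliberately naive oracle that treats every
# slot as answered; a real system would search the surrounding text.
tokens = "John gave a ball to Mary".split()
print(is_linguistically_complete(tokens, lambda w, s, t, i: True))   # True
print(is_linguistically_complete(["gave"], lambda w, s, t, i: False))  # False
```

The predicate is the open research problem here; everything else is bookkeeping over the lexicon.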
Note : 1) This obviously doesn’t include the reasons for the phenomena in the text, since the chain of reasons is infinite.
2) If the word ‘ball’ is present, we aren’t looking for information like the radius of the ball or the colour of the ball around it in the text. That isn’t Linguistic information. Pieces of information (variables) that are logically, mathematically, etc. dependent are not included in this ‘completeness’.