Analysts can create serious problems when they try to produce a single definition for each term, even within a single semantic community and a single universe of discourse. This may sound crazy, so let me start by giving an example.
Not long ago I was in a discussion concerning data quality. The group leaders decided there was a need to discuss the so-called "dimensions of data quality", e.g. Accuracy, Consistency, Timeliness, and so on. We started with Consistency. Each individual in the group offered their view of what Data Consistency was. Several different definitions were offered. Eventually, the group took a vote and decided which definition of Data Consistency it preferred. The alternative definitions were neither discussed further nor recorded. The individuals who had proposed the unaccepted definitions felt slighted, perhaps even hurt. And they had a right to be - as far as I could tell, the alternative definitions represented valid concepts.
What a broken process! Definitions of valid concepts were simply rejected, and lost. Individuals were turned off from definitional work, maybe permanently. Why did it happen? I think I can offer a hypothesis.
The first mistake is to believe that every known concept is represented by a term in language. Unknown concepts will obviously not be so represented. But what counts as "unknown" in a semantic community? Is it any concept not known by everyone in the community? What about a concept understood only by a minority in the community?
The second mistake is to expect more of technical terms than is warranted. They sound "scientific". They sound as if we should expect them to convey something precise - a trick taken advantage of by thousands of misleading advertisements every day. But there is no reason to expect a technical term to have a definition agreed on by everyone in a semantic community. There may be several valid concepts competing to be signified by the term.
The expectation - in technical areas - that terms mirror known reality should not be relied on. The phrase "language as a mirror of reality" is connected with Wittgenstein (see http://www.percepp.com/lacus.htm). It should be granted that he may not have been talking about terms per se, and that probably few analysts are consciously influenced by Wittgenstein. However, the presupposition seems to have got about somehow, and in any case academics show little interest in how analysts go about their daily work.
Language cannot be assumed to mirror reality in technical areas. Analysts must create governance processes that guide their definitional work so they harvest all valid concepts, and encourage members of semantic communities to contribute. Terms are starting points, not a final list of signs that denote all the individual concepts in a universe of discourse.
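To make the point concrete, here is a minimal sketch of what "harvesting all valid concepts" could look like as a data structure. Everything here is hypothetical - the term, the sample definitions, and the contributor names are invented for illustration; the idea is simply that a glossary entry holds a list of candidate definitions rather than a single winner.

```python
from dataclasses import dataclass, field

@dataclass
class Definition:
    text: str          # the proposed definition
    contributor: str   # who proposed it, so nothing is lost anonymously

@dataclass
class GlossaryEntry:
    term: str
    definitions: list = field(default_factory=list)

    def propose(self, text, contributor):
        # Record every proposal; none is voted out of existence.
        self.definitions.append(Definition(text, contributor))

# Hypothetical proposals from the Data Consistency discussion:
entry = GlossaryEntry("Data Consistency")
entry.propose("Values agree across redundant copies of the data.", "Analyst A")
entry.propose("Values conform to defined business rules.", "Analyst B")

# Both concepts survive. The community can later distinguish them,
# e.g. by coining qualified terms for each one.
for d in entry.definitions:
    print(d.contributor, "->", d.text)
```

The design choice is the whole argument in miniature: the term is a key into a set of concepts, not a slot that can hold only one.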