What does "competence" mean?
What does 资质 (zīzhì) mean?
资质 is a Chinese word, pronounced zīzhì, with three senses: first, a person's natural qualities or aptitude; second, more generally, the conditions, qualifications and abilities required to engage in a particular kind of work or activity; third, bearing and appearance. A note on enterprise qualification certificates: an enterprise qualification certificate is, in effect, a document certifying that an enterprise is capable of carrying out a given type of project. Taking construction enterprises as an example: under the Regulations on the Administration of Construction Enterprise Qualifications (Decree No. 87 of the Ministry of Construction of the People's Republic of China), a construction enterprise must apply for qualification on the basis of its registered capital, net assets, professional and technical personnel, technical equipment and record of completed construction projects; only after passing review and obtaining a qualification certificate of the corresponding grade may it engage in construction activities within the scope permitted by that grade. Source: Baidu Baike, entry "资质".
What does "competence" mean?
The meaning of competence:
competence [UK /ˈkɒmpɪtəns/, US /ˈkɑmpɪtəns/] n. ability; skill; (dated) sufficient means, an adequate income;
Plural: competences
Example sentences:
1. Being an effective manager and leader begins with people's belief in your competence and character.
2. But what of Mr Osborne's competence?
3. Tolerance, competence and the third thing is empathy.
4. Managerial competence will simply not suffice.
5. His genius is in his competence.
Help me find some material on competence and performance
A distinction introduced by Chomsky into linguistic theory but of wider application. Competence refers to a speaker's knowledge of his language, as manifest in his ability to produce and to understand a theoretically infinite number of sentences, most of which he may never have seen or heard before. Performance refers to the specific utterances, including grammatical mistakes and non-linguistic features like hesitations, accompanying the use of language. The distinction parallels Varela's distinction between organization and structure. The former refers to the relations and interactions of a system, specifically excluding reference to the properties of its components, whereas the latter refers to the relations manifest in the concrete realization of such a system in a physical space. Competence, like organization, describes the potentiality of a system. Performance, like structure, describes the forms actually realized as a subset of those conceivable.
Summary
The current generation of language processing systems is based on linguistically motivated competence models of natural languages. The problems encountered with these systems suggest the need for performance-models of language processing, which take into account the statistical properties of actual language use. This article describes the overall set-up of such a model. The system I propose employs an annotated corpus; in analysing new input it tries to find the most probable way to reconstruct this input from fragments that are already contained in the corpus. This perspective on language processing also has interesting consequences for linguistic theory; some of these are briefly discussed.
1. Introduction.
The starting point for this article was the question: what significance can language technology have for language theory? The usual answer to this question is that the application of the methods and insights of theoretical linguistics in working computer programs is a good way to test and refine these theoretical ideas. I agree with this answer, and I will emphatically reiterate it here. But most of this article is devoted to a somewhat more speculative train of thought, which shows that language-technological considerations can have important theoretical implications.
My considerations focus on a fundamental problem faced by current language-processing systems: the problem of ambiguity. To solve the ambiguity problem it is necessary to bring linguistic insights about the structure and meaning of language utterances under a common denominator with statistical data about actual language use. I will sketch a technique which might be able to do this: data-oriented parsing, by means of pattern-matching with an annotated corpus. This parsing technique may be of more than technological interest: it suggests a new and attractive perspective on language and the language faculty.
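To give a first impression of what pattern-matching with an annotated corpus could look like, here is a minimal sketch of one ingredient of such an approach: enumerating the fragments of corpus trees and estimating their relative frequencies. The tuple encoding of trees, the one-tree toy corpus and the relative-frequency score are illustrative assumptions of this sketch, not a specification of the system discussed in this article.

    from collections import Counter
    from itertools import product

    def fragments(tree):
        """All fragments rooted at the root node of `tree` (nested tuples:
        (label, child1, ...); bare strings are words). A fragment keeps the
        root and, for each child, either cuts the child off (leaving its
        category label as an open substitution site) or substitutes one of
        the child's own fragments."""
        label, children = tree[0], tree[1:]
        option_lists = []
        for child in children:
            if isinstance(child, str):
                option_lists.append([child])                      # words are kept
            else:
                option_lists.append([child[0]] + fragments(child))
        return [(label,) + combo for combo in product(*option_lists)]

    def all_fragments(tree):
        """Fragments rooted at every node of the tree."""
        if isinstance(tree, str):
            return []
        result = fragments(tree)
        for child in tree[1:]:
            result.extend(all_fragments(child))
        return result

    # A one-tree toy "annotated corpus":
    corpus = [("S", ("NP", "John"),
                    ("VP", ("V", "saw"), ("NP", "Mary")))]
    counts = Counter(f for tree in corpus for f in all_fragments(tree))
    roots = Counter(f[0] for f in counts.elements())

    # Relative frequency of a fragment among all fragments with the same root:
    print(counts[("VP", "V", "NP")] / roots["VP"])   # 1 of 4 VP-fragments: 0.25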
First, a warning. The following discussion concentrates almost exclusively on the problem of syntactic analysis. Of course this is only a sub-problem -- both in language theory and in language technology. But this problem already turns out to yield so much food for thought that it does not seem useful to complicate the discussion by addressing the integration with phonetics, phonology, morphology, semantics, pragmatics and discourse-processing. How the different kinds of linguistic knowledge in a language-processing system ought to be distributed over the modules of the algorithm is a question which will be left out of consideration completely.
2. Linguistics and language technology.
To be able to turn linguistics into a hard science, Chomsky [1957] assigned a mathematical correlate to the intuitive idea of a "language". He proposed to identify a language with a set of sentences: with the set of grammatically correct utterance forms that are possible in the language. The goal of descriptive linguistics is then to characterise, for individual languages, the set of grammatical sentences explicitly, by means of a formal grammar. And the goal of explanatory linguistic theories should then be to determine the universal properties which the grammars of all languages share, and to give a psychological account of these universals.
In this view, linguistic theory is not immediately concerned with describing the actual language use in a language community. Although we may assume that there is a relation between language users' grammaticality intuitions and their actual language behaviour, we must make a sharp distinction between the two: on the one hand, the language system may offer possibilities which are rarely or never used; on the other hand, actual language use involves mistakes and sloppiness which a linguistic theory should not necessarily account for. In Chomsky's terminology: linguistics is concerned with the linguistic competence rather than the actual performance of the language user. Or, in the words of Saussure, who had emphasized this distinction before: with langue rather than parole.
Chomsky's work has constituted the methodological paradigm for almost all linguistic theory of the last few decades. This comprises not only the research tradition that is explicitly aiming at working out Chomsky's syntactic insights. The perspective summarized above has also determined the goals and methods of the most important alternative approaches to syntax, and of the semantic research traditions which have grown out of Richard Montague's work. Now we may ask: how does language technology relate to this language-theoretical paradigm?
Relatively few language technologists invoke Chomsky's ideas explicitly, but their methodological assumptions tend to be implicitly based on his paradigm. Of course there are also important differences between theoretically oriented and technologically oriented language research. Compared to theoretical linguistics, language-technological research has usually been more descriptive, and less concerned with the universal validity and the explanatory power of the theory. In developing a translation system or a natural-language database-interface, the descriptive adequacy of the grammar of the input language obviously has a higher priority than gaining insights about syntactic universals. Equally evident is the observation that the syntactic and semantic rules developed for a language-technological application must be articulated in a strictly formal way, whereas the results of theoretical research may often take the form of essayistic reflections on different variations of an informally presented idea.
We thus see a complementary relation between theoretical linguistics and language technology; the theory is concerned, often in an informal way, with the general structure of linguistic competence and Universal Grammar; in language technology one tries to specify, in complete formal detail, descriptively adequate grammars of individual languages. Therefore, language-technological work will eventually be of considerable theoretical importance: the theoretical speculations about the structure of linguistic competence can only be validated if they give rise to a formal framework which allows for the specification of descriptively adequate grammars. Because theoretical linguists do not seem particularly interested in this boundary condition of their work, the application-oriented grammar-development activities constitute a useful and necessary complement to theoretical linguistic research.
Language-technological work has shown in the meantime that for the development of theoretically interesting grammars, computational support is indispensable. Formal grammars which describe some non-trivial phenomena in a partially correct way tend to get extremely complex -- so complex that it is difficult to imagine how they could be tested, maintained and extended without computational tools.
There is another reason why language technology is interesting for linguistic theory: language-technological applications involve systems which are intended to work with some form of "real language" as input. Implementing a competence grammar will therefore be not enough in the end: one also needs software which deals with possibly relevant performance phenomena, and this software must interface in an adequate way with the competence grammar. The possibility of complementing a competence-grammar with an account of performance phenomena is another boundary condition of current linguistic theory which does not receive a lot of attention in theoretical research. Language-technological research may also be of theoretical importance here.
There are thus many opportunities for interesting interactions between language theory and language technology; but until recently such interactions did not often occur. For a long time, language technology developed in relative isolation from theoretical linguistics. This isolation came about because Chomsky's formulation of his syntactic insights crucially used the notion of a "transformation" -- and many found this a computationally unattractive notion, especially for analysis-algorithms. Computational linguists felt they needed to develop alternative methods for language description which were more directly coupled to locally observable properties of surface structure, and therefore easier to implement; this gave rise to Augmented Transition Networks and enriched context-free grammars. After the heyday of Transformational Grammar was over, a remarkable rapprochement between language theory and language technology took place, because enriched context-free grammars, which are considered computationally attractive, acquired theoretical respectability. Gazdar, Pullum and Sag created a breakthrough in this area with their Generalized Phrase Structure Grammar.
For enriched context-free grammars, effective parsing algorithms have been developed. There are procedures which establish the grammaticality of an arbitrary input-sentence in a reasonably efficient way, and determine its structural description(s) as defined by the grammar. This has made it possible to implement interesting prototype-systems which analyse their input in accordance with such a grammar. The results of this approach have been encouraging. They were certainly better than the results of competing approaches from Artificial Intelligence which worked without formal syntax (such as the prototypical versions of "frame-based parsing" and "neural networks"). Nevertheless the practical application of linguistic grammars in language processing systems is not without problems. These we consider in the next section.
3. Limitations of current language processing systems.
The applicability of currently existing linguistic technology depends of course on the availability of descriptively adequate grammars for substantial fragments of natural languages. But writing a system of rules which provides a good characterization of the grammatical structures of a natural language turns out to be surprisingly difficult. There is no formal grammar yet which correctly describes the richness of a natural language -- not even a formal grammar which adequately covers a non-trivial corpus of a substantial size. The problem is not only that the grammar of natural language is large and complex, and that we therefore still need hard work and deep thought to describe it. The process of developing a formal grammar for a particular natural language is especially disappointing because it becomes increasingly difficult and laborious as the grammar gets larger. The larger the number of phenomena that are already partially accounted for, the larger the number of interactions that must be inspected when one tries to introduce an account of new phenomena.
A second problem with the current syntax/parsing-paradigm is even easier to notice: the problem of ambiguity. It turns out that as soon as a grammar characterises a non-trivial part of natural language, almost every input-sentence of a certain length has many (often very many) different structural analyses (and corresponding semantic interpretations). This is problematic because usually most of these interpretations are not perceived as possible by a human language user, although there is no reason to exclude them on formal syntactic or semantic grounds. Often it is only a matter of relative implausibility: the only reason why the language user does not become aware of a particular interpretation of a sentence is that another interpretation is more plausible.
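The combinatorics can be made concrete with a small experiment. The following sketch counts the analyses that a toy context-free grammar (in Chomsky normal form; the grammar and the sentence are invented for illustration) assigns to a sentence with two prepositional phrases; each additional prepositional phrase makes the number of analyses grow rapidly.

    import collections

    # A toy context-free grammar in Chomsky normal form; the rules and the
    # lexicon are invented for this illustration.
    RULES = [
        ("S",  ("NP", "VP")),
        ("VP", ("V",  "NP")),
        ("VP", ("VP", "PP")),   # PP attached to the verb phrase
        ("NP", ("NP", "PP")),   # PP attached to a noun phrase
        ("NP", ("D",  "N")),
        ("PP", ("P",  "NP")),
    ]
    LEXICON = {"I": "NP", "saw": "V", "the": "D", "in": "P", "with": "P",
               "man": "N", "park": "N", "telescope": "N"}

    def count_parses(words):
        """CYK-style chart that counts analyses instead of building them:
        chart[i][j][A] = number of distinct ways to derive words[i:j] from A."""
        n = len(words)
        chart = [[collections.Counter() for _ in range(n + 1)]
                 for _ in range(n + 1)]
        for i, w in enumerate(words):
            chart[i][i + 1][LEXICON[w]] += 1
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for lhs, (b, c) in RULES:
                        chart[i][j][lhs] += chart[i][k][b] * chart[k][j][c]
        return chart[0][n]["S"]

    print(count_parses("I saw the man in the park with the telescope".split()))
    # prints 5: the attachment possibilities of the two PPs alone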
The two problems I mentioned are not independent of each other. Because of the first problem (the disquieting combinatorics of interacting syntactic phenomena), we might be inclined to stop refining the syntactic subcategories at a certain point, thus ending up with a more "tolerant" grammar which accepts various less happy constructions as nevertheless grammatical. This is a possible strategy, because the Chomskyan paradigm does not clearly fix how language competence is to be delimited with respect to language performance. Not all judgments of sentences as "strange", "unusual", "infelicitous", "incorrect", or "uninterpretable" need to be viewed as negative grammaticality-judgments; ultimately the elegance of the resulting theory determines whether certain unwellformedness-judgments are to be explained by the competence-grammar or by the performance-module. But the designer of a language-processing system who relaxes the system's grammar is not finished by doing that: he is confronted with an increased ambiguity in the grammatical analysis process, and must design a performance-module which can make a sensible selection from the set of alternative analyses.
4. Competence and Performance.
The limitations of current language processing systems are not surprising: they follow immediately from the fact that these systems are built on a competence-grammar in the Chomskyan sense. As mentioned above, Chomsky made an emphatic distinction between the "competence" of a language user and the "performance" of this language user. The competence consists in the knowledge of language which the language user in principle has; the performance is the result of the psychological process that employs this knowledge (in producing or in interpreting language utterances).
The formal grammars that theoretical linguistics is concerned with aim at characterising the competence of the language user. But the preferences that language users display in dealing with syntactically ambiguous sentences constitute a prototypical example of a phenomenon that in the Chomskyan view belongs to the realm of performance.
The ambiguity-problem discussed above follows from an intrinsic limitation of linguistic competence-grammars: such grammars define the sentences of a language and the corresponding structural analyses, but they do not specify a probability ordering or any other ranking between the different sentences or between the different analyses of one sentence. This limitation is even more serious when a grammar is used for processing input which frequently contains mistakes. Such a situation occurs in processing spoken language. The output of a speech recognition system is always very imperfect, because such a system can often only guess at the identity of its input-words. In this situation the parsing mechanism has an additional task which it does not have in dealing with correctly typed alphanumeric input. The speech recognition module may discern several alternative word sequences in the input signal; only one of these is correct, and the parsing-module must employ its syntactic information to arrive at an optimal decision about the nature of the input. A simple yes/no judgment about the grammaticality of a word sequence is insufficient for this purpose: many word sequences are strictly speaking grammatical but very implausible; and the number of word sequences of this kind gets larger when a grammar accounts for a larger number of phenomena.
To construct effective language processing systems, we must therefore implement performance-grammars rather than competence-grammars. These performance-grammars must contain information not only about the structural possibilities of the general language system, but also about "accidental" details of the actual language use in a language community, which determine the language experiences of an individual, and thereby influence what kind of utterances this individual expects to encounter, and what structures and meanings these utterances are expected to have.
The linguistic perspective on performance involves the implicit assumption that language behaviour can be accounted for by a system that comprises a competence-grammar as an identifiable sub-component. But because of the ambiguity problem this assumption is computationally unattractive: if criteria could be found for preferring certain syntactic analyses over others, the efficiency of the whole process might benefit from applying these criteria at an early stage, integrated with the strictly syntactic rules. This would amount to an integrated implementation of competence- and performance-notions.
But we can also go one step further, and fundamentally question the customary concept of a competence-grammar. We can try to account for language-performance without invoking an explicit competence-grammar. (This would mean that grammaticality-judgments are to be accounted for as performance phenomena which do not have a different cognitive status than other performance phenomena.) This is the idea that I want to work out somewhat more concretely now. Later (in section 7) I will return to the possible theoretical merits of this point of view.
5. Statistics.
There is an alternative language description tradition which has always focussed on the concrete details of actual language use, often without paying much attention to the abstract language system: the statistical tradition. In this approach the characterisation of syntactic structures is often completely ignored; one only describes "superficial" statistical properties of a representative language corpus that is as large as possible. Usually one simply indicates the occurrence frequencies of different words, the probability that a specific word is followed by another specific word, the probability that a specific sequence of 2 words is followed by a specific word, etc. (n-th order Markov chains). See, for instance, Bahl et al. (1983), Jelinek (1986).
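For the simplest case, a first-order Markov chain, the relevant probabilities can be estimated directly from bigram counts. A minimal sketch (the two-sentence toy corpus is invented for the example):

    from collections import Counter

    def bigram_model(sentences):
        """Estimate P(next word | previous word) by relative frequency."""
        unigrams, bigrams = Counter(), Counter()
        for sentence in sentences:
            words = ["<s>"] + sentence.split() + ["</s>"]   # boundary markers
            unigrams.update(words[:-1])
            bigrams.update(zip(words[:-1], words[1:]))
        return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

    corpus = ["the man saw the telescope",
              "the man saw the park"]
    model = bigram_model(corpus)
    print(model[("the", "man")])   # P(man | the) = 2/4 = 0.5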
The Markov-approach has been very successful for the purpose of selecting the most probable sentence from the set of possible outputs generated by a speech recognition component. It is clear, however, that for various other purposes this approach is completely insufficient, because it does not employ a notion of syntactic structure. For a natural-language database-interface, for instance, semantic interpretation rules must be applied, on the basis of a structural analysis of the input. And there are also statistically significant regularities in corpus-sentences which span long word sequences by way of syntactic structures; in the Markov approach these are ignored. The challenge is now to develop a method for language description and parsing which does justice to the statistical as well as the structural aspects of language.
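The selection task just mentioned then amounts to comparing the probabilities that such a model assigns to the competing word sequences. A minimal sketch (the hard-coded model corresponds to the toy corpus above; the candidate transcriptions and the floor probability for unseen bigrams are invented assumptions):

    import math

    # A miniature bigram model, hard-coded here (as estimated in the
    # previous sketch from the same toy corpus) to keep this self-contained.
    MODEL = {("<s>", "the"): 1.0, ("the", "man"): 0.5, ("man", "saw"): 1.0,
             ("saw", "the"): 1.0, ("the", "park"): 0.25,
             ("the", "telescope"): 0.25, ("park", "</s>"): 1.0,
             ("telescope", "</s>"): 1.0}

    def sequence_logprob(words, model, floor=1e-6):
        """Log-probability of a word sequence under a bigram model;
        unseen bigrams receive a small floor probability instead of zero."""
        chain = ["<s>"] + words + ["</s>"]
        return sum(math.log(model.get(pair, floor))
                   for pair in zip(chain[:-1], chain[1:]))

    # Two transcription hypotheses for the same (hypothetical) speech input:
    candidates = ["the man saw the park".split(),
                  "the man thaw the park".split()]   # acoustically similar
    best = max(candidates, key=lambda c: sequence_logprob(c, MODEL))
    print(" ".join(best))   # the man saw the park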
The idea that a synthesis between the syntactic and the statistical approaches would be useful and interesting has occasionally been broached before, but so far it has not been thought through very well. The only existing technical instantiation, the concept of a stochastic grammar, is rather simplistic. Such a grammar just juxtaposes the most basic syntactic notion with the most basic probabilistic notion: an "old-fashioned" context-free grammar describes syntactic structures by means of a system of rewrite rules, with a probability attached to each individual rule.
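In such a stochastic context-free grammar, the probabilities of all rules with the same left-hand side sum to one, and the probability of a parse tree is the product of the probabilities of the rules used in it. A minimal sketch (the grammar and its probabilities are invented for illustration):

    # A toy stochastic context-free grammar: each rule carries a probability,
    # and the probabilities of rules sharing a left-hand side sum to 1.
    RULES = {
        ("S",  ("NP", "VP")): 1.0,
        ("VP", ("V",  "NP")): 0.7,
        ("VP", ("VP", "PP")): 0.3,
        ("NP", ("D",  "N")):  0.6,
        ("NP", ("NP", "PP")): 0.4,
        ("PP", ("P",  "NP")): 1.0,
        ("D",  ("the",)):     1.0,
        ("V",  ("saw",)):     1.0,
        ("P",  ("with",)):    1.0,
        ("N",  ("man",)):     0.5,
        ("N",  ("park",)):    0.5,
    }

    def tree_probability(tree):
        """Probability of a parse tree = product of its rule probabilities.
        Trees are nested tuples (label, child1, ...); bare strings are words."""
        if isinstance(tree, str):
            return 1.0
        label, children = tree[0], tree[1:]
        rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
        p = RULES[(label, rhs)]
        for child in children:
            p *= tree_probability(child)
        return p

    tree = ("S", ("NP", ("D", "the"), ("N", "man")),
                 ("VP", ("V", "saw"), ("NP", ("D", "the"), ("N", "park"))))
    print(tree_probability(tree))   # 1.0*0.6*0.5*0.7*0.6*0.5 = 0.063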