
Reconfigurable Computing for Speech Recognition:

Preliminary Findings

S.J. Melnikoff, P.B. James-Roxby, S.F. Quigley & M.J. Russell

Presented at:

10th International Conference on Field Programmable Logic and Applications

(FPL 2000)

Villach, Austria, 28th-30th August 2000

Published in:

Lecture Notes in Computer Science #1896, pp.495-504

Springer-Verlag

© Springer-Verlag


Reconfigurable Computing for Speech Recognition:

Preliminary Findings*

S.J. Melnikoff1, P.B. James-Roxby2, S.F. Quigley1 & M.J. Russell1

1School of Electronic and Electrical Engineering, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom


2Xilinx, Inc., 2300 55th Street, Boulder, Colorado 80301, USA


Abstract. Continuous real-time speech recognition is a highly computationally-demanding task, but one which can take good advantage of a parallel processing system. To this end, we describe proposals for, and preliminary findings of, research in implementing in programmable logic the decoder part of a speech recognition system. Recognition via Viterbi decoding of Hidden Markov Models is outlined, along with details of current implementations, which aim to exploit properties of the algorithm that could make it well-suited for devices such as FPGAs. The question of how to deal with limited resources, by reconfiguration or otherwise, is also addressed.

1 Introduction

Techniques for performing speech recognition have existed since the late 1960s, and have since been implemented in both hardware and software.

A typical speech recognition system begins with a signal processing stage which converts the speech waveform into a sequence of acoustic feature vectors, or “observations”. That data is then passed through a decoder which computes the sequence of words or phones (sub-word units, e.g. vowels and consonants) which is most likely to have given rise to the data. Higher-level information about context and grammar can be used to aid the process.

What is being proposed here is an implementation of the decoder. This is highly computationally demanding, but has the advantage that it is also highly parallelisable, and hence an ideal candidate for application in programmable logic.

With such devices - FPGAs in particular - becoming available which can utilise an increasing number of processing resources at ever faster speeds, this paper describes preliminary findings in research aimed at implementing speech recognition on a programmable logic device, while also looking at how to deal with the eventuality that the device used does not have sufficient logic resources to perform all of the necessary calculations at each step, by use of run-time reconfiguration or otherwise. It is envisaged that such a decoder would act as a coprocessor in a larger system. This is advantageous both because it would free up resources for the rest of the system, and also because a dedicated speech processor is likely to be able to perform recognition faster than a general-purpose device.

* This research is funded by the UK Engineering and Physical Sciences Research Council.

This last point is particularly relevant here. At the time of writing, reconfigurable computing is still in its infancy, and so one of the aims of this research is to justify the use of this technology over conventional approaches (e.g. software implementations, ASICs, use of RAM, etc.).

The paper is organised as follows. Section 2 describes the theory of speech recognition based on the Hidden Markov Model. Section 3 then outlines some previous parallel implementations of this model, and describes a proposed structure for this implementation. This leads on to section 4, which discusses the findings of the current implementation, and looks at the problem of resource shortage. Section 5 describes the structure of the whole recognition system, and also looks at system performance. Section 6 then summarises the conclusions drawn so far, and section 7 outlines some of the tasks that will be carried out as the research continues.

2 Speech Recognition

The most widespread and successful approach to speech recognition is based on the Hidden Markov Model (HMM) [2], [8], [11], a probabilistic process which models spoken utterances as the outputs of finite state machines (FSMs).

2.1 The Speech Recognition Problem

The underlying problem is as follows. Given an observation sequence O = O_1 O_2 ... O_T, where each O_t is data representing speech which has been sampled at fixed intervals, and a number of potential models M, each of which is a representation of a particular spoken utterance (e.g. word or sub-word unit), we would like to find the model M which best describes the observation sequence, in the sense that the probability P(M|O) is maximised (i.e. the probability that M is the best model given O).

This value cannot be found directly, but can be computed via Bayes’ Theorem [11] by maximising P(O|M). The resulting recognised utterance is the one represented by the model that is most likely to have produced O. The models themselves are based on HMMs.

2.2 The Hidden Markov Model

An N-state Markov Model is completely defined by a set of N states forming a finite state machine, and an N × N stochastic matrix defining transitions between states, whose elements a_ij = P(state j at time t | state i at time t-1); these are the transition probabilities.

With a Hidden Markov Model, each state additionally has associated with it a probability density function b_j(O_t) which determines the probability that state j emits a particular observation O_t at time t (the model is “hidden” because any state could have emitted the current observation). The p.d.f. can be continuous or discrete; accordingly, the pre-processed speech data can be a multi-dimensional vector or a single quantised value. b_j(O_t) is known as the observation probability.

Such a model can only generate an observation sequence O = O_1 O_2 ... O_T via a state sequence of length T, as a state only emits one observation at each time t. The set of all such state sequences can be represented as routes through the state-time trellis shown in Fig. 1. The (j,t)th node (a state within the trellis) corresponds to the hypothesis that observation O_t was generated by state j. Two nodes (i,t-1) and (j,t) are connected if and only if a_ij > 0.

As described above, we compute P(M|O) by first computing P(O|M). Given a state sequence Q = q_1 q_2 ... q_T, where the state at time t is q_t, the joint probability, given a model M, of state sequence Q and observation sequence O is given by:

$$P(O, Q \mid M) = b_1(O_1) \prod_{t=2}^{T} a_{q_{t-1} q_t}\, b_{q_t}(O_t) \qquad (1)$$

assuming the HMM is in state 1 at time t = 1. P(O|M) is then the sum over all possible routes through the trellis, i.e.

$$P(O \mid M) = \sum_{\text{all } Q} P(O, Q \mid M) \qquad (2)$$

Fig. 1. Hidden Markov Model, showing the finite state machine for the HMM (left), the observation sequence (top), and all the possible routes through the trellis (arrowed lines)

2.3 Viterbi Decoding

In practice, the probability P (O |M ) is approximated by the probability associated with the state sequence which maximises P (O ,Q |M ). This probability is computed efficiently using Viterbi decoding.

Firstly, we define the value δ_t(j), which is the maximum probability that the HMM is in state j at time t. It is equal to the probability of the most likely partial state sequence q_1 q_2 ... q_t which emits observation sequence O_1 O_2 ... O_t, and which ends in state j:

$$\delta_t(j) = \max_{q_1 q_2 \ldots q_{t-1}} P(q_1 q_2 \ldots q_{t-1};\; q_t = j;\; O_1 O_2 \ldots O_t \mid M) \qquad (3)$$

It follows from equations (1) and (3) that the value of δ_t(j) can be computed recursively as follows:

$$\delta_t(j) = \max_{1 \le i \le N} [\delta_{t-1}(i)\, a_{ij}]\; b_j(O_t) \qquad (4)$$

where i is the previous state (i.e. at time t-1).

This value determines the most likely predecessor state ψ_t(j) for the current state j at time t, given by:

$$\psi_t(j) = \arg\max_{1 \le i \le N} [\delta_{t-1}(i)\, a_{ij}] \qquad (5)$$

Each utterance has an HMM representing it, and so the most likely state sequence not only describes the most likely route through a particular HMM, but by concatenation provides the most likely sequence of HMMs, and hence the most likely sequence of phones uttered.
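To make the recursion concrete, here is a small worked example (invented for illustration; it is not taken from the paper). Take a 2-state HMM with a_11 = 0.6, a_12 = 0.4, a_21 = 0.3, a_22 = 0.7, and suppose that δ_1(1) = 0.06, δ_1(2) = 0.02, b_1(O_2) = 0.5 and b_2(O_2) = 0.1. Equations (4) and (5) then give:

$$\delta_2(1) = \max\{0.06 \times 0.6,\; 0.02 \times 0.3\} \times 0.5 = 0.036 \times 0.5 = 0.018, \quad \psi_2(1) = 1$$

$$\delta_2(2) = \max\{0.06 \times 0.4,\; 0.02 \times 0.7\} \times 0.1 = 0.024 \times 0.1 = 0.0024, \quad \psi_2(2) = 1$$

Both nodes record state 1 as their most likely predecessor; these ψ values are exactly what the backtracking stage described below will follow.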

3 Parallel Implementation

3.1 Previous Implementations

Parallel implementations of speech recognition systems have been produced before, most using HMMs. In contrast to the approach described here, previous implementations have generally used multiple processing elements (PEs) of varying sophistication, either at the board or ASIC level, rather than a programmable logic device.

Typically, the recognition problem has been broken down with each PE dealing with one HMM node. For example, [4] has an array of PEs that mirrors the structure of the trellis. One issue that has arisen with some previous parallel implementations is the problem of balancing the workload among a limited number of PEs, which results in a speedup that is less than linear. Steps can be taken to avoid redundant calculations (e.g. “pruning” paths whose probabilities fall below a threshold [9]), but this is more difficult on parallel architectures than on serial ones [4]. Other approaches to parallel implementation include processor farms to automatically balance the load [1], [10], a more coarse-grained distributed computing model [3], [10], a tree-based architecture [9], or custom ICs with fine-grained PEs [7].

By using programmable logic, not only do we effectively have as many PEs as we want, but each PE can be optimised to handle the calculations for a single node. In addition, devices with (global) on-chip RAM are particularly suitable, as a buffer is needed to store the best predecessor states at each stage, for the purposes of backtracking.

Hence programmable logic, having not been applied to speech recognition in this way, has properties that may give it an edge over previous parallel implementations.

3.2 Proposed Structure

As described above, in order to perform Viterbi decoding, the trellis must be traversed forwards to find the best path; then, once the observation sequence has ended, backtracking takes place, during which the best path is traced in reverse in order to extract the state sequence taken.

Forward Computation. Each node in the trellis must evaluate equations (4) and (5). This consists of multiplying each predecessor node’s probability δ_{t-1}(i) by the transition probability a_ij, and comparing all of these values. The most likely is multiplied by the observation probability b_j(O_t) to produce the result.

After a number of stages of multiplying probabilities in this way, the result is likely to be very small. In addition, without some scaling method, it demands a large dynamic range of floating point numbers, and implementing floating point multiplication requires more resources than for fixed point.

A convenient alternative is therefore to perform all calculations in the log domain. This converts all multiplications to additions, and narrows the dynamic range, thereby reducing all the arithmetic to (ideally) fixed point additions and comparisons, without in any way affecting the validity of the results obtained. Hence equation (4) becomes

$$\delta_t(j) = \max_{1 \le i \le N} [\delta_{t-1}(i) + \log a_{ij}] + \log[b_j(O_t)] \qquad (6)$$

The result of these changes is that a node can have the structure shown in Fig. 2. The figure highlights the fact that each node is dependent only on the outputs of nodes at time t-1; hence all nodes in all HMMs at time t can perform their calculations in parallel.

The way in which this can be implemented is to deal with an entire column of nodes of the trellis in parallel. As the speech data comes in as a stream, we can only deal with one observation vector at a time, and so we only need to implement one column of the trellis. The new data values (observation vector O t and maximal path probabilities δt -1(j )) pass through the column, and the resulting δt values are latched, ready to be used as the new inputs to the column when the next observation data appears.
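To make this concrete, the following C sketch emulates one time step of the column in software (the array layout, fixed-point widths and the dense loop over all predecessors are illustrative assumptions; the hardware evaluates every node concurrently, and real left-to-right monophone HMMs have only a few non-zero a_ij per state):

    #include <stdint.h>

    #define N_STATES 147   /* 49 monophone HMMs x 3 emitting states */

    /* Fixed-point log-domain parameters (layout assumed): log_a[i][j]
       holds log a_ij; log_b[j][o] holds log b_j(o) for an 8-bit
       quantised observation o. */
    extern int16_t log_a[N_STATES][N_STATES];
    extern int16_t log_b[N_STATES][256];

    /* One column step of the trellis: equation (6) plus the predecessor
       record of equation (5) for every node. */
    void column_step(const int16_t *delta_prev, int16_t *delta_next,
                     uint8_t *psi, uint8_t obs)
    {
        for (int j = 0; j < N_STATES; j++) {
            int32_t best = INT32_MIN;
            uint8_t best_i = 0;
            for (int i = 0; i < N_STATES; i++) {
                int32_t cand = (int32_t)delta_prev[i] + log_a[i][j];
                if (cand > best) { best = cand; best_i = i; }
            }
            /* The scaling needed to prevent overflow (see below) is omitted. */
            delta_next[j] = (int16_t)(best + log_b[j][obs]);
            psi[j] = best_i;
        }
    }

The caller swaps delta_prev and delta_next between observations, mirroring the latching of the δ_t values described above.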

Backtracking. Each node outputs its most likely predecessor state ψ_t(j), which is stored in a sequential buffer external to the nodes. When the current observation sequence reaches its end at time T, a sequencer module reads the most likely final state from the buffer, chosen according to the highest value of δ_T(j). It then uses this as a pointer to the collection of penultimate states to find the most likely state at time T-1, and continues with backtracking in this way until the start of the buffer is reached. In the event that the backtracking buffer is filled before the observation sequence ends, techniques exist for finding the maximal or near-maximal path.
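A software sketch of this backtracking pass is below (assuming, for illustration, a buffer psi_buf[t][j] holding each node's most likely predecessor at time t, and delta_T holding the final column of path scores):

    #include <stdint.h>

    #define N_STATES 147

    /* Trace the most likely path in reverse through the psi buffer, as
       the sequencer module does. state_seq must hold T entries; it ends
       up in forward order even though the traversal runs backwards. */
    void backtrack(const uint8_t psi_buf[][N_STATES],
                   const int16_t *delta_T, int T, uint8_t *state_seq)
    {
        /* Most likely final state: the highest delta_T(j). */
        uint8_t s = 0;
        for (int j = 1; j < N_STATES; j++)
            if (delta_T[j] > delta_T[s]) s = j;

        state_seq[T - 1] = s;
        for (int t = T - 1; t > 0; t--) {
            s = psi_buf[t][s];      /* predecessor of s at time t-1 */
            state_seq[t - 1] = s;
        }
    }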

As the resulting state sequence will be produced in reverse, it is stored in a sequencer until the backtracking is complete, before being output. This state sequence reveals which HMMs have been traversed, and hence which words or sub-word units have been uttered. This information can then be passed to software which assembles the utterances back into words and sentences.

The structure of the decoder is shown in Fig. 3. Note that there will be additional logic in order to initialise the system, and to scale the values of δ_t in order to prevent overflow.

Fig. 2. Block diagram of a node representing state j in a 4-state finite state machine, with all calculations performed in the log domain. δ_{t-1}(i) are the outputs of previous nodes; O_t is the current observation vector. The transition probabilities a_ij and the observation probability distribution b_j(O_t) are fixed for a specific node

Fig. 3. Decoder structure. The nodes’ outputs from time t-1 are supplied as their inputs at time t, along with the new observation vector O_t. The most likely predecessors of each state ψ_t(j) are stored in the backtrack buffer until the speech data ends, then sent to the sequencer, which traces the most likely path in reverse, before outputting it in the correct order


4 Implementation in Programmable Logic

4.1 Discrete HMM-Based Implementation

Parameters for a discrete HMM-based system have been obtained using speech recognition tools and real speech waveforms. In this case, each speech observation takes the form of an 8-bit value, corresponding to the address in a 256-entry table which describes the observation probability distribution b_j(O_t) as a set of 15-bit vectors. Each node in the state-time trellis has a table associated with it. The transition probabilities are encoded in the same format.

We are using 49 HMMs, each one representing a monophone, i.e. an individual English vowel or consonant. Each HMM has 3 emitting states, so there are 3×49 = 147 nodes. Treating the speech observation data as an address, we need to obtain the value at this address for each node. If we do all this for each node in parallel, we need a 2205-bit-wide (147 nodes × 15 bits) data bus.
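In software terms, the parallel lookup amounts to the loop below (the table name and the packing of 15-bit entries into int16_t words are assumptions; in hardware all 147 reads happen in the same cycle over the wide bus):

    #include <stdint.h>

    #define N_NODES 147

    /* Each node's observation p.d.f. b_j(O_t): a 256-entry table of
       15-bit log-probabilities, held in the low bits of an int16_t. */
    extern int16_t obs_table[N_NODES][256];

    /* Use the 8-bit observation as an address into every node's table. */
    void lookup_obs_probs(uint8_t obs, int16_t *log_b_out)
    {
        for (int j = 0; j < N_NODES; j++)
            log_b_out[j] = obs_table[j][obs];
    }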

The choice is therefore whether to store this data in on-chip RAM, use the LUTs as distributed RAM/ROM, or to store it off-chip.

Off-Chip RAM. If we were to store the data off-chip, we would have to perform several reads per observation vector, as the kind of data width required would not be realisable. In addition, off-chip RAM is likely to be slower than any on-chip alternative. However, dedicated RAM can provide more data storage than is available on an FPGA, which becomes particularly relevant as the recognition system is made more complex.

On-Chip RAM. On-chip RAM can offer increased speed, and very high data width. A trellis column containing 11 of the 49 HMMs has been implemented on a Xilinx Virtex XCV1000. It requires 31 of the 32 Block RAMs on the FPGA, plus around 4000 of the 24,500 LUTs (16%) for the addition and compare logic, and allows all 33 nodes to obtain their observation probabilities in parallel at an estimated clock frequency of 50MHz.

From these figures, we can predict that an XCV1000 could store around 70 HMMs. If more were required, we would have to use reconfiguration or off-chip RAM, and split the HMM column up, processing it in sections. At 50MHz, even allowing for a deep pipeline and access or reconfiguration delays, this would permit of the order of thousands of HMMs to be handled within the 10ms allowed for real-time speech processing.
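As a rough consistency check on that claim (the per-section cost below is an assumed figure, not one quoted in the paper): at 50MHz the real-time budget per observation is

$$5 \times 10^7\ \text{cycles/s} \times 10^{-2}\ \text{s} = 5 \times 10^5\ \text{cycles}.$$

Even if each section of the column (up to around 70 HMMs) cost some 5,000 cycles of pipeline fill, RAM access or reconfiguration delay, roughly 100 sections, i.e. about 7,000 HMMs, would fit within a single 10ms frame, consistent with the order-of-thousands estimate.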

While this gives a useful prediction of resource usage, clearly a larger FPGA is required for a full implementation, not least because the above figures do not include the resources needed for the backtracking buffer (which is likely to require on-board RAM as well), scaler, and other control logic.

Distributed RAM/ROM. LUTs can typically be configured as distributed RAM or ROM. While using these for storing the observation probabilities is likely to result in faster accesses than for Block RAM, it is at the expense of resources that need to be used for other parts of the system - whereas using Block RAM does not incur this penalty.

4.2 Reconfiguration vs. RAM [5]

A larger device may alleviate some of these problems, but speech recognition systems can easily be made more complex and so eat into any spare resources (explicit duration modelling [4], [6], [8] is one such resource-hungry improvement). It is therefore necessary to consider early on which of the above options should be used.

One possible solution is to use distributed ROM with run-time reconfiguration (RTR). Focussing on the observation probabilities, the only parts of the FPGA that need to be reconfigured are some of the LUTs; the associated control logic is identical for each node, and does not need to be changed (the transition probabilities would be different, but require significantly less data, and so could, in theory, be stored on-chip). In addition, the system control logic can remain fixed.

Given this situation, an FPGA which is too small to store all the required data could perhaps be repeatedly reconfigured at run-time by overlaying the LUTs holding the probabilities, so that each new observation vector is passed through all the HMMs in the system.

If on-chip RAM is too small, the only alternative is to perform a number of reads from off-chip RAM. A key deciding factor in whether to do this rather than RTR is the speed with which each can be performed. For RAM in particular, we are limited by the width of the data bus, which will obviously determine how many reads we need to do for each speech observation.
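The RAM side of the trade-off can be sized with a simple calculation (the 64-bit bus width is purely an assumption for illustration; the 2205-bit figure is from section 4.1):

    #include <stdio.h>

    int main(void)
    {
        const int bits_needed = 147 * 15;   /* 2205 bits per observation */
        const int bus_width   = 64;         /* assumed off-chip bus width */
        /* Reads per observation, rounding up. */
        const int reads = (bits_needed + bus_width - 1) / bus_width;
        printf("%d reads per observation\n", reads);   /* prints 35 */
        return 0;
    }

With 10ms available per observation, a few tens of sequential reads are easily affordable; the comparison that matters is against the time taken to partially reconfigure the LUTs holding the same data.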

It remains to be seen which method will be the more suitable for this application.

5 Speech Recognition Systems

5.1 System Structure

At present, we are using an XCV1000-6 BG560, which resides on an ESL RC1000-PP prototyping board. The board is a PCI card which sits inside a host PC.

Recorded speech waveforms are pre-processed in software, and stored as quantised data. Once the initial system is completed, the speech data will be sent to the FPGA via the PCI bus; the FPGA will then perform the decoding and output a state sequence to the PC, which will map it back into phones.

In other words, the FPGA will be acting as a coprocessor, dealing with the most computationally-demanding part of the recognition process, thereby reducing the load on the PC’s processor.

5.2 Performance

A significant and unique property of speech is that for the purposes of recognition, we can sample it at a mere 100Hz, giving us 10ms to process each piece of data - assuming that we are always aiming to do recognition in real-time. This means that whether we use RTR or off-chip RAM, there is a large period available to perform these operations, which provides a lot of flexibility when it comes to making the decoding algorithm more complex.

At this data rate, the pre-processing can be done in software in real-time. While this does not rule out eventually doing this on the FPGA as well, for the time being the FPGA will be used purely for the decoding.

6 Conclusion

So far, we have investigated Hidden Markov Model Viterbi decoding as a method of speech recognition, and have broken down the process in such a way as to take advantage of a parallel computing architecture.

Based on this analysis, we have begun work on an implementation of a real-time monophone speech decoder in programmable logic, which is expected to fit comfortably within an XCV1000, while utilising off-chip RAM. We believe that even if future FPGAs (or other similar devices) have the capacity to deal with a basic HMM-based algorithm, improvements can be made which, while making recognition more effective, would require off-chip RAM access or run-time reconfiguration in order to deal with the increase in processing resources needed.

7 Future Work

This research is clearly at an early stage, and so there are a number of issues which still need to be dealt with, including:

• Integrate the recogniser into a complete system: merely synthesising a recogniser will provide useful information on resource usage, but we need to be able to test it! This will require using the FPGA prototyping board mentioned above, and writing software for the PC that houses it. It is envisaged that only recorded speech will be used for the time being. The results will then be tested against the output of an already completed software version of the recogniser.

• Improve the recognition system: once a monophone recogniser is completed, the next logical step is to move on to a bigram- and trigram-based system (pairs and triples of monophones). Whereas a monophone recogniser requires 49 HMMs, a bigram/trigram version uses around 500-600, another reason why being able to cope with limited resources is very important for this application.

• Use of semi-continuous and continuous HMMs: FPGAs are particularly well suited to dealing with discrete (quantised) speech data. However, use of continuous data has been shown to produce better results in terms of recognition accuracy. Implementing this requires computing sums of Normal distributions (Gaussian mixtures) on an FPGA; a sketch of this computation appears below.
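For reference, the quantity that would have to be computed per state is sketched here in C (the diagonal-covariance parameterisation, mixture size, dimensionality and all names are assumptions; a hardware version would more likely use log-domain approximations than a literal exp()):

    #include <math.h>

    #define N_MIX 4   /* mixture components per state (assumed) */
    #define DIM   8   /* feature vector dimension (assumed) */

    /* One state's Gaussian mixture: weights c_m, means, and diagonal
       covariances; gconst[m] is the precomputed log normalisation
       -0.5*(DIM*log(2*pi) + sum_d log var[m][d]). */
    typedef struct {
        double c[N_MIX];
        double mean[N_MIX][DIM];
        double var[N_MIX][DIM];
        double gconst[N_MIX];
    } MixState;

    /* b_j(O_t) for a continuous HMM: a weighted sum of Normal densities. */
    double obs_prob(const MixState *s, const double *o)
    {
        double b = 0.0;
        for (int m = 0; m < N_MIX; m++) {
            double e = 0.0;
            for (int d = 0; d < DIM; d++) {
                double diff = o[d] - s->mean[m][d];
                e += diff * diff / s->var[m][d];
            }
            b += s->c[m] * exp(s->gconst[m] - 0.5 * e);
        }
        return b;
    }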

References

1. Alexandres, S., Moran, J., Carazo, J. & Santos, A., “Parallel architecture for real-time speech recognition in Spanish,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’90), 1990, pp.977-980.

2. Cox, S.J., “Hidden Markov models for automatic speech recognition: theory and application,” British Telecom Technology Journal, 6, No.2, 1988, pp.105-115.

3. Kimball, O., Cosell, L., Schwartz, R. & Krasner, M., “Efficient implementation of continuous speech recognition on a large scale parallel processor,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’87), 1987, pp.852-855.

4. Mitchell, C.D., Harper, M.P., Jamieson, L.H. & Helzerman, R.A., “A parallel implementation of a hidden Markov model with duration modeling for speech recognition,” Digital Signal Processing, 5, No.1, 1995, pp.43-57.

5. James-Roxby, P.B. & Blodget, B., “Adapting constant multipliers in a neural network implementation,” to appear in: Proc. IEEE Symposium on FPGAs for Custom Computing Machines (FCCM 2000), 2000.

6. Levinson, S.E., “Continuously variable duration hidden Markov models for automatic speech recognition,” Computer Speech and Language, 1, 1986, pp.29-45.

7. Murveit, H. et al., “Large-vocabulary real-time continuous-speech recognition system,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’89), 1989, pp.789-792.

8. Rabiner, L.R., “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, 77, No.2, 1989, pp.257-286; also in Waibel, A. & Lee, K.F. (eds.), Readings in Speech Recognition, 1990, Morgan Kaufmann Publishers, Inc., pp.267-296.

9. Roe, D.B., Gorin, A.L. & Ramesh, P., “Incorporating syntax into the level-building algorithm on a tree-structured parallel computer,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’89), 1989, pp.778-781.

10. Sutherland, A.M., Campbell, M., Ariki, Y. & Jack, M.A., “OSPREY: a transputer based continuous speech recognition system,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’90), 1990, pp.949-952.

11. Young, S., “A review of large-vocabulary continuous-speech recognition,” IEEE Signal Processing Magazine, 13, No.5, 1996, pp.45-57.
