Top-k Learning to Rank: Labeling, Ranking and Evaluation

Shuzi Niu, Jiafeng Guo, Yanyan Lan, Xueqi Cheng
niushuzi@…, {guojiafeng, lanyanyan, cxq}@…
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, P.R. China

ABSTRACT

In this paper, we propose a novel top-k learning to rank framework, which involves a labeling strategy, a ranking model and evaluation measures. The motivation comes from the difficulty of obtaining reliable relevance judgments from human assessors when applying learning to rank in real search systems. The traditional absolute relevance judgment method is difficult in both gradation specification and human assessing, resulting in a high level of disagreement on judgments. Pairwise preference judgment, a good alternative, is often criticized for increasing the complexity of judgment from O(n) to O(n log n). Considering the fact that users mainly care about top ranked search results, we propose a novel top-k labeling strategy which adopts pairwise preference judgment to generate the top k ordered items from n documents (i.e. the top-k ground-truth) in a manner similar to that of HeapSort. As a result, the complexity of judgment is reduced to O(n log k). With the top-k ground-truth, traditional ranking models (e.g. pairwise or listwise models) and evaluation measures (e.g. NDCG) no longer fit the data set. Therefore, we introduce a new ranking model, namely FocusedRank, which fully captures the characteristics of the top-k ground-truth. We also extend the widely used evaluation measures NDCG and ERR to be applicable to the top-k ground-truth, referred to as κ-NDCG and κ-ERR, respectively. Finally, we conduct extensive experiments on benchmark data collections to demonstrate the efficiency and effectiveness of our top-k labeling strategy and ranking models.

Categories and Subject Descriptors

H.3.3 [Information Search and Retrieval]: Retrieval models

General Terms

Algorithms, Performance, Experimentation, Theory

Keywords

Learning to Rank, Top-k, Preference Judgment, Evaluation

1. INTRODUCTION

In the past few years, learning to rank has been widely recognized as an important technique for information retrieval (IR). A vital part of employing learning to rank in real search systems is the acquisition of reliable and high quality labeled datasets, both for training and evaluation. In traditional IR literature, assessors are requested to determine the relevance of a document under some pre-defined gradations, which is called the absolute relevance judgment method. However, there are some significant drawbacks to this evaluation process. Firstly, the specifics of the gradations (i.e. how many grades to use and what those grades mean) must be defined, and it is not clear how these choices will affect relative performance measurements [26]. Secondly, the assessing burden increases with the complexity of the relevance gradations; the choice of label is not clear when there are more factors to consider, leading to a high level of disagreement on judgments [4].

Recently, pairwise preference judgment has been investigated as a good alternative [20, 26]. Instead of assigning a relevance grade to a document, an assessor looks at two pages and judges which one is better. Compared with absolute relevance judgment, the advantages lie in that: (1) There is no need to determine the gradation specifications as it is a binary decision. (2) It is easier for an assessor to express a preference for one document over the other than to assign a pre-defined grade to each of them [7]. (3) Most state-of-the-art learning to rank models, pairwise or listwise, are trained over preferences. As noted by Carterette et al. [7], "by collecting preferences directly, some of the noise associated with difficulty in distinguishing between different levels of relevance may be reduced." Although preference judgment likely produces more reliable labeled data, it is often criticized for increasing the complexity of judgment (e.g. from O(n) to O(n log n) [20]), which poses a big challenge to wide use. Do we actually need to judge so many pairs for real search systems? If not, which pairs do we choose? How do we choose? These questions become the original motivation of this paper.

As we know, in the real Web search scenario, it is well accepted that users mainly care about the top results [30]. In other words, the ordering of the top results (typically the results on the first one or two pages) is critical for users' search experience. It indicates that a labeling strategy shall take effort to figure out the top results and judge the preference orders among them, but pay less attention to the exact

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

SIGIR’12, August 12–16, 2012, Portland, Oregon, USA.

Copyright 2012 ACM 978-1-4503-1472-5/12/08...$15.00.

preference orders among the rest of the results. Based on this observation, we propose a novel top-k labeling strategy which adopts pairwise preference judgment to generate the top k ordered items from a set of n items in a manner similar to that of HeapSort. The ground-truth obtained from this top-k labeling strategy is a mixture of the total order of the top k items, and the relative preferences between the set of top k items and the set of the rest n−k items, referred to as the top-k ground-truth. With this top-k labeling strategy, we can not only capture enough information for learning to rank [30], but also largely reduce the complexity of judgment to O(n log k).

With the top-k ground-truth, we find that traditional ranking models, either pairwise or listwise, are no longer suitable for the labeled data set. It is natural to introduce a mixed ranking model, with the listwise model capturing the total order of the top k items and the pairwise model capturing the relative preference between the set of top k items and the set of the rest n−k items. Such a mixed model can thus combine the advantages of both pairwise and listwise approaches to fully exploit the information in the top-k ground-truth. We refer to such a mixed ranking model as FocusedRank, since it places more emphasis on the ordering of the top items.

For evaluation, traditional IR evaluation measures (e.g., MAP, NDCG and ERR), which are mainly defined on the absolute judgment, cannot be directly applied to the top-k ground-truth. To address this problem, we extend NDCG and ERR to κ-NDCG and κ-ERR by taking a function of the position of items as the absolute relevance label. The proposed evaluation measures thus emphasize the importance of the ordering of the top k items. Unlike evaluation measures based on preference judgments [6], κ-NDCG and κ-ERR keep the same form as NDCG and ERR and thus enjoy all the merits of traditional IR evaluation measures.

Finally, we conduct extensive experiments on benchmark data collections. Major experimental findings include: (1) With the top-k labeling strategy, the time cost of labeling one pair is much less than that of one item in absolute relevance judgment, and the overall time cost is comparable with that of absolute relevance judgment. (2) With the top-k labeling strategy, the level of agreement on judgments is higher than that of absolute relevance judgment. (3) With FocusedRank, the ranking performance is significantly better than the state-of-the-art pairwise and listwise ranking models.

To sum up, we propose a top-k learning to rank framework^1, a novel and complete framework including labeling strategy, ranking model and evaluation measures. Our main contributions are as follows:

1. We propose a novel top-k labeling strategy which adopts pairwise preference judgment to obtain reliable ground-truth for learning to rank. As a result, the complexity of judgment is reduced to O(n log k).

2. We introduce a new ranking model named FocusedRank to capture the characteristics of the top-k ground-truth; it outperforms the state-of-the-art pairwise and listwise ranking models on benchmark datasets.

^1 Note that Xia et al. also mentioned the top k ranking problem in [30]. The difference is that they focus on the ranking models under the circumstances of the traditional labeling strategy and evaluation measures.

3. We derive two new evaluation measures named κ-NDCG and κ-ERR applicable to the top-k ground-truth. They both emphasize the importance of top-k ordering and enjoy the merits of traditional IR measures with a similar formulation.

2. RELATED WORK

In this section, we briefly review some related work on labeling strategy, ranking model and evaluation measure in the learning to rank literature.

2.1 Labeling Strategy

In learning to rank, labeling strategies can be divided into two categories: absolute judgment and relative judgment [21, 26, 32, 7].

In absolute judgment, assessors are usually requested to assign a graded score to an item independent of the other items [15, 4, 27, 28, 29] under some pre-defined gradations. Such a labeling strategy has been widely adopted in both industry and academia to construct benchmark datasets in IR, e.g. the TREC data sets (since 2000), the Microsoft learning to rank datasets [18] and the Yahoo! Learning to Rank Challenge 2010 data set. One difficult problem in using absolute judgment is to clearly define the specifics of the gradations, i.e. how many grades to use and what those grades mean. Some previous studies tried to figure out the proper number of relevance gradations [22, 19]. However, as noted by Cox [12], "there is no single number of response alternatives for a scale which is appropriate under all circumstances". If the descriptions of each degree are not clearly defined, a multi-grade judgment method can be easily misused in the evaluation process [34]. Moreover, the assessing burden in absolute judgment increases with the complexity of the relevance gradations. When there are more factors to consider, the choice of label is not clear, resulting in a high level of disagreement on judgments [4].

In contrast, relative judgment aims to directly judge the relative order of a set of items [26]. As a typical form of relative judgment, pairwise preference judgment asks assessors to express a preference for one item over the other [6]. One concern with using this strategy is the complexity of judgment, since the number of item pairs is polynomial in the number of items. Carterette et al. [7] attempted to reduce the number of pairs for judging by using the transitivity of relevance among documents. Nir Ailon [1] proposed a formal pairwise method based on QuickSort which can reduce the number of preference judgments from O(n^2) to O(n log n). Compared with O(n) in absolute judgment, this is still not affordable for assessors. To increase the efficiency of relative judgment, R. Song et al. [26] further proposed to select the best item each time from the remaining items, which is only applicable to small datasets. Different from the above related work, we propose a novel top-k labeling strategy to largely save the effort of preference judgment, by exploiting what Web search users actually care about in ranking.

2.2 Ranking Model

So far learning to rank has been mainly addressed by pointwise, pairwise, and listwise ranking models. In pointwise models [17], ranking is transformed to regression or classification on individual items to represent the absolute label of each item. In pairwise models [14, 11, 3], ranking is transformed to classification on item pairs to represent the preference between two items. In listwise models [33, 31, 5], instances are document lists generated through the comparison over item pairs, which is superior in modeling more discriminative judgments. Therefore, we can conclude that pointwise models are well suited to absolute relevance judgment, while both pairwise and listwise models are applicable in either the absolute or the relative judgment scenario.

In previous work [30], Xia et al. extended three listwise ranking models, namely top-k ListMLE, top-k ListNet and top-k RankCosine, to fit the top-k scenario. Note that they addressed this scenario under the circumstances of the traditional labeling strategy and evaluation measures. They conducted experiments on top-k ListMLE, and claimed that top-k ListMLE can outperform traditional pairwise and listwise ranking models. However, it cannot avoid the computational complexity over the entire permutation, such as that in top-k ListNet.

In our paper, we take the above ranking models as the baselines to show the superiority of FocusedRank.

2.3 Evaluation Measure

To evaluate the effectiveness of a ranking model, many IR measures have been proposed. Here we give a brief introduction to several popular ones which are widely used in learning to rank. See also [16] for other measures.

Precision@k [2] is a measure for evaluating the top k positions of a ranked list using two grades (relevant and irrelevant) of relevance judgment. With Precision as the basis, Average Precision (AP) and Mean Average Precision (MAP) [2] are derived to evaluate the average performance of a ranking model.

While Precision considers only two-graded relevance judgments, Discounted Cumulated Gain (DCG) [13] is an evaluation measure that can leverage relevance judgments in terms of multiple ordered categories, and has an explicit position discount factor in its definition. By normalizing DCG@k with its maximum possible value, we get another popular measure named Normalized Discounted Cumulated Gain (NDCG).

To relax the additive nature and the underlying independence assumption in NDCG, another evaluation measure, Expected Reciprocal Rank (ERR), was proposed in [8]. It implicitly discounts documents which are shown below very relevant documents, and is defined as the expected reciprocal length of time that the user will take to find a relevant document.

Although MAP, NDCG and ERR are widely used in IR, they all adopt absolute relevance labels in their formulation, which imposes restrictions on direct application in the scenario of relative judgment. Therefore, new measures such as bpref, ppref, and nwppref [6] have been proposed. However, these measures have not been widely accepted by the IR community. In this paper, we extend traditional IR evaluation measures to our relative judgment scenario with a similar formulation.

3. TOP-K LEARNING TO RANK

In this section, we will introduce our top-k learning to rank framework in detail, which involves the labeling strategy, the ranking model and the evaluation measures.

3.1 Top-k Labeling Strategy

According to previous work and the above discussions, pairwise preference judgment is superior to traditional absolute relevance judgment in the acquisition of reliable judgments from human assessors. However, it is often criticized for increasing the complexity of judgment. In this section, we propose a novel top-k labeling strategy by exploiting what Web search users actually care about in ranking. Our labeling strategy can not only capture enough information for learning to rank, but also largely save the effort of preference judgment.

3.1.1 Motivation

In real Web search applications, users usually pay more attention to the top-k items [9, 25, 10]. For example, according to a user study [30], in modern search engines, about 62% of search users only click on the results within the first page, and 90% of search users click on the results within the first three pages. This shows that the ordering of the top k results is critical for users' search experience. Two ranked lists of results will likely provide the same value to users (and thus suffer the same loss) if they have the same ranking results for the top positions [30]. Moreover, a good ranking of the top results is much more important than a good ranking of the others. Therefore, a labeling strategy shall take effort to figure out the top k results and judge the preference orders among them carefully, but pay less attention to the exact preference orders among the rest of the results.

3.1.2 Labeling Strategy

Based on the above analysis, we propose a novel top-k labeling strategy using pairwise preference judgment as the basis. The basic assumption of our labeling strategy is the transitivity of preference judgments of relevance [24, 7]. That is, if i is preferred to j and j is preferred to k, the assessor will also prefer i to k. With this assumption, our labeling strategy generates the top k ordered items from a set of n items in a manner similar to that of HeapSort. It mainly takes the following three steps:

Step 1: Randomly select k items from the set of n items and build a min-heap with t as the root, based on the pairwise preference judgments by assessors. Here a min-heap is a complete binary tree with the property: if B is a child node of A, A is less relevant than B.

Step 2: Randomly select an item r from the rest n−k items; the preference judgment between t and r is then conducted by assessors. We update the heap if necessary and obtain a new min-heap containing the k most relevant items so far. This is repeated until all the items have been selected.

Step 3: Sort the final k items in the min-heap in descending order, and append the rest of the items after the k items.

The detailed labeling algorithm is shown in Algorithm 1. With the above top-k labeling strategy, the obtained ground-truth is a mixture of the total order of the top k items and the relative preference between the set of top k items and the set of the rest n−k items, referred to as the top-k ground-truth.

3.1.3 Complexity Analysis

Here we consider the judgment complexity of each step in the top-k labeling strategy:

(1) The judgment complexity of building a min-heap with k items in Step 1 is O(k);

Algorithm 1: Top-k Labeling based on HeapSort

1   Input: (1) D, an item set; (2) k, top item number.
2   begin
3       randomly select k items from D, denoted as D_k.
4       construct a min-heap H_k over D_k with preference judgment.
5       for each item d in D − D_k do
6           obtain preference judgment over pair (d, D_k[1]).
7           if the judgment is "d more relevant than D_k[1]" then
8               D_k[1] = d,
9               update H_k over D_k with preference judgment.
10          end if
11      end for
12      sort H_k to obtain the top k items in descending order, denoted as L_D.
13      append D − D_k to L_D.
14  end
15  Output: L_D.

(2) The judgment complexity in Step 2 is O((n−k) log k), according to the complexity analysis of HeapSort;

(3) The judgment complexity in Step 3 is O(k log k).

Therefore, the total judgment complexity of the top-k labeling strategy is about O(n log k). Compared with the QuickSort strategy adopted by Nir Ailon [1] for preference judgment, our top-k labeling strategy significantly reduces the complexity from O(n log n) to O(n log k), where usually k ≪ n. The judgment complexity of our strategy is nearly comparable with that of the absolute judgment (i.e. O(n)). Experimental results in the following section also verify the efficiency of our labeling strategy, which is consistent with the theoretical analysis.
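The three labeling steps above can be sketched as a runnable simulation. This is our illustrative sketch, not the authors' code: `prefer` plays the role of the human assessor, and the wrapper class routes every heap comparison through it so judgments can be counted.

```python
import heapq
import random

def topk_label(items, k, prefer):
    """Sketch of Algorithm 1 (top-k labeling via a min-heap).
    prefer(a, b) simulates an assessor: True iff a is more relevant than b.
    Returns (ranked_list, number_of_pairwise_judgments)."""
    judgments = [0]  # tally of pairwise preference judgments

    class Wrapped:
        # "less than" means "less relevant", so the heap root is always
        # the least relevant of the current top-k candidates
        def __init__(self, item):
            self.item = item
        def __lt__(self, other):
            judgments[0] += 1
            return prefer(other.item, self.item)

    pool = list(items)
    random.shuffle(pool)
    heap = [Wrapped(x) for x in pool[:k]]
    heapq.heapify(heap)                      # Step 1: O(k) judgments
    for x in pool[k:]:                       # Step 2: O((n-k) log k) judgments
        judgments[0] += 1                    # judge x against the heap root
        if prefer(x, heap[0].item):
            heapq.heapreplace(heap, Wrapped(x))
    top = [w.item for w in sorted(heap, reverse=True)]  # Step 3: O(k log k)
    rest = [x for x in pool if x not in set(top)]       # unordered tail
    return top + rest, judgments[0]
```

With n = 50 and k = 10 (the setting of the user study below), the simulation needs on the order of n log k judgments, far fewer than the n(n−1)/2 of exhaustive pairwise labeling.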

3.2 Top-k Ranking Model

With the top-k ground-truth obtained from our labeling method, traditional ranking models no longer fit the labeled dataset. On one hand, pairwise ranking models can capture the information of the relative preference, but fail to model the total order of the top k items since they ignore the position information. On the other hand, listwise ranking models can capture the information of the total order, but suffer from great computational complexity due to the large undifferentiated item set in the top-k ground-truth. To address this problem, we propose FocusedRank, a mixed ranking model with a listwise ranking model capturing the total order of the top k items and a pairwise ranking model capturing the relative preference between the set of top k items and the set of the rest n−k items. Such a mixed model can thus combine the advantages of both pairwise and listwise approaches to fully exploit the information in the top-k ground-truth.

3.2.1 Notations

Given $m$ training queries $\{q_i\}_{i=1}^m$, let $x_i=\{x_1^{(i)},\dots,x_{n_i}^{(i)}\}$ be the items associated with $q_i$, where $n_i$ is the number of documents for this query; let $T_i$ be the set of top $k$ items and $F_i$ the set of the other $n_i-k$ items. Denote the total order of $T_i$ as a permutation $\pi_i$, where $\pi_i(x_j^{(i)})$ stands for the position of item $x_j^{(i)}\in T_i$. Denote $P_i=\{(x_u^{(i)},x_v^{(i)}): x_u^{(i)}\in T_i,\ x_v^{(i)}\in F_i\}$ as the set of pairs constructed between the set of top $k$ items and the set of the rest $n_i-k$ items. We relate the top-k ground-truth to relevance labels by defining $y_i=\{y_1^{(i)},\dots,y_{n_i}^{(i)}\}$ as position-aware relevance labels of the corresponding items, and $y_i^{(T)}=\{y_j^{(i)}: x_j^{(i)}\in T_i\}$. In our paper, we use $y_j^{(i)}=k+1-\pi_i(x_j^{(i)})$ if $x_j^{(i)}\in T_i$, and $y_j^{(i)}=0$ otherwise. That is, suppose $k=10$: the relevance labels for the top 10 items are defined in descending order from 10 to 1, while the relevance labels for the rest of the items are defined as 0.
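This label mapping can be written as a one-line sketch; the helper name `position_labels` is ours, not the paper's:

```python
def position_labels(pi, n, k):
    """Map a top-k permutation to position-aware relevance labels:
    y_j = k + 1 - pi[j] for items in the top k, and 0 for the rest.
    `pi` maps an item index to its 1-based position among the top-k items."""
    return [k + 1 - pi[j] if j in pi else 0 for j in range(n)]

# k = 3: the item ranked first gets label 3, second gets 2, third gets 1
labels = position_labels({0: 1, 3: 2, 5: 3}, n=6, k=3)
```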

3.2.2 FocusedRank

In FocusedRank, we adopt a listwise loss to model the total order of the top $k$ items, and a pairwise loss to model the preference of the top $k$ items over the other items. The general loss function of FocusedRank on a query $q_i$ is presented as follows^2:

$$L(f;q_i)=\beta\,L_{list}(f;T_i,y_i)+(1-\beta)\,L_{pair}(f;P_i,y_i),\qquad(1)$$

where $L_{list}$ stands for a listwise ranking loss, $L_{pair}$ stands for a pairwise ranking loss, and $\beta$ is a trade-off coefficient to balance the two terms. As examples, we combine three popular listwise ranking models (i.e. SVM^MAP, AdaRank and ListNet) with three popular pairwise ranking models (i.e. RankSVM, RankBoost and RankNet) respectively to get three specific forms of FocusedRank, namely FocusedSVM, FocusedBoost and FocusedNet accordingly.

(1) FocusedSVM: RankSVM plus SVM^MAP

Both RankSVM [14] and SVM^MAP [33] apply SVM technology to optimize the number of misclassified pairs and the average precision, respectively. Therefore, we combine these two ranking models together to get a new FocusedRank method, named FocusedSVM. Specifically, RankSVM is adopted to model the pairwise preference of $P_i$, and SVM^MAP is employed to model the total order of $T_i$. Therefore, $L_{list}$ and $L_{pair}$ in Eq. (1) have the following specific forms, respectively:

$$L_{list}=\max_{z_i^{(T)}}\Big(1-AP(z_i^{(T)},y_i^{(T)})+w^T\Psi(z_i,T_i)-w^T\Psi(y_i^{(T)},T_i)\Big),$$

$$L_{pair}=\sum_{(x_u^{(i)},x_v^{(i)})\in P_i}\max\{0,\ 1-(y_u^{(i)}-y_v^{(i)})(w^Tx_u^{(i)}-w^Tx_v^{(i)})\}.$$

Like RankSVM and SVM^MAP, FocusedSVM can then be formulated as an optimization problem as follows:

$$\min_w\ \frac{1}{2}\|w\|^2+C\sum_{i=1}^m\Big[\beta\,\zeta^{(i)}+(1-\beta)\sum_{(x_u^{(i)},x_v^{(i)})\in P_i}\xi_{u,v}^{(i)}\Big]$$
$$\text{s.t.}\quad (y_u^{(i)}-y_v^{(i)})(w^Tx_u^{(i)}-w^Tx_v^{(i)})\ge 1-\xi_{u,v}^{(i)},\ \forall(x_u^{(i)},x_v^{(i)})\in P_i,$$
$$w^T\Psi(y_i^{(T)},T_i)-w^T\Psi(z_i,T_i)\ge 1-AP(y_i^{(T)},z_i^{(T)})-\zeta^{(i)},\ \forall z_i^{(T)},$$
$$\xi_{u,v}^{(i)}\ge 0,\ \zeta^{(i)}\ge 0,\quad i=1,\dots,m,$$

where $z_i^{(T)}$ stands for any incorrect label and $\Psi$ is the same as that in SVM^MAP. Similar to SVM, $\frac{1}{2}\|w\|^2$ controls the complexity of the model $w$, and $C$ is a trade-off parameter between the model complexity and the hinge loss relaxations.

(2) FocusedBoost: RankBoost plus AdaRank

Both RankBoost [11] and AdaRank [31] adopt boosting technology to output a ranking model by combining weak rankers, where the combination coefficients are determined by the probability distributions on document pairs and ranked lists, respectively. Hence we combine these two

^2 In application, $L_{list}$ and $L_{pair}$ should be normalized to a comparable range, and we adopt this trick in our experiments.

Algorithm 2: Learning Algorithm for FocusedBoost

1   Input: training data in terms of top-k ground-truth.
2   Given: initial distributions $D_1$ on $T_i$ and $D'_1$ on $P_i$, $i=1,\dots,m$.
3   For $t=1,\dots,T$:
4       train weak ranker $f_t$ to minimize
        $r_t=\beta\sum_{i=1}^m D_t(T_i)L_{list}+(1-\beta)\sum_{i=1}^m\sum_{(x_u^{(i)},x_v^{(i)})\in P_i}D'_t(x_u^{(i)},x_v^{(i)})L_{pair}$.
5       choose $\alpha_t=\frac{1}{2}\log\frac{1+r_t}{1-r_t}$.
6       update
        $D_{t+1}(T_i)=\frac{1}{Z_{t+1}}D_t(T_i)\exp\big(-E(\sum_{s=1}^t\alpha_sf_s,T_i,y_i^{(T)})\big)$,
        $D'_{t+1}(x_u^{(i)},x_v^{(i)})=\frac{1}{Z'_{t+1}}D'_t(x_u^{(i)},x_v^{(i)})\exp\big(\alpha_t(w^Tx_u^{(i)}-w^Tx_v^{(i)})\big)$,
        where $Z_{t+1}=\sum_{i=1}^m D_t(T_i)\exp\big(-E(\sum_{s=1}^t\alpha_sf_s,T_i,y_i^{(T)})\big)$ and
        $Z'_{t+1}=\sum_{i=1}^m\sum_{(x_u^{(i)},x_v^{(i)})\in P_i}D'_t(x_u^{(i)},x_v^{(i)})\exp\big(\alpha_t(w^Tx_u^{(i)}-w^Tx_v^{(i)})\big)$.
7   Output: $f(x)=\sum_t\alpha_tf_t(x)$.

ranking models together to get a new FocusedRank method, named FocusedBoost. Specifically, RankBoost is adopted to model the preference of $P_i$, and AdaRank is used to model the total order of $T_i$. Therefore, $L_{list}$ and $L_{pair}$ in Eq. (1) have the following specific forms, respectively:

$$L_{list}=\exp\big(-E(f,T_i,y_i^{(T)})\big),$$
$$L_{pair}=\exp\big(-(y_u^{(i)}-y_v^{(i)})(w^T(x_u^{(i)}-x_v^{(i)}))\big),$$

where $E(f,T_i,y_i^{(T)})$ stands for the IR evaluation measure to optimize. As in RankBoost and AdaRank, the detailed algorithm is shown in Algorithm 2.

(3) FocusedNet: RankNet plus ListNet

Both RankNet and ListNet aim to optimize a cross entropy between the target probability and the modeled probability. The probability of the former is defined based on the exponential function of the difference between the scores of any two documents in all document pairs given by the scoring function $f$. The probability of the latter is the permutation probability of a ranking list using the Plackett–Luce model [23], which is also based on the exponential function. Hence we combine these two ranking models together to get a new FocusedRank method, named FocusedNet. Specifically, RankNet is adopted to model the pairwise preference of $P_i$, and ListNet is used to model the total order of $T_i$. Therefore, $L_{list}$ and $L_{pair}$ in Eq. (1) have the following specific forms, respectively:

$$L_{list}=-\sum_{\sigma\in\Sigma_i}P_{y_i^{(T)}}(\sigma)\log P_{z_i^{(T)}}(\sigma),$$
$$L_{pair}=-\bar P_{u,v}\log P_{u,v}(f)-(1-\bar P_{u,v})\log(1-P_{u,v}(f)),$$

where $\bar P_{u,v}=1$ if $y_u^{(i)}\ge y_v^{(i)}$, and $\bar P_{u,v}=0$ otherwise. In addition, we have

$$z_i^{(T)}=\{z_j^{(i)}=w^Tx_j^{(i)}: x_j^{(i)}\in T_i\},$$
$$P_{u,v}(f)=\frac{\exp(w^Tx_u^{(i)}-w^Tx_v^{(i)})}{1+\exp(w^Tx_u^{(i)}-w^Tx_v^{(i)})},$$
$$P_s(\sigma)=\prod_{j=1}^{|T_i|}\frac{\phi(s_{\sigma(j)})}{\sum_{l=j}^{|T_i|}\phi(s_{\sigma(l)})},\qquad s=y_i^{(T)},\ z_i^{(T)},$$

where $s_{\sigma(j)}$ denotes the score of the object at position $j$ of permutation $\sigma$.

3.3 Top-k Evaluation Measure

As aforementioned, traditional IR evaluation measures (e.g., MAP^3, NDCG and ERR) are mainly defined on the absolute judgment, and thus cannot be directly applied to the top-k ground-truth. To address this problem, we extend NDCG and ERR to κ-NDCG and κ-ERR by taking the position of items as the absolute label, in the way defined in Section 3.2.1. As a result, the derived evaluation measures actually emphasize the importance of the ordering of the top k items.

3.3.1 κ-NDCG

We first give the precise definition of NDCG as follows:

$$NDCG@l=\frac{1}{N_l}\sum_{j=1}^l\frac{2^{r_j}-1}{\log_2(1+j)},$$

where $r_j$ is the relevance label of the item at position $j$ in the output ranked list, and $N_l$ is a constant which denotes the maximum value of NDCG@l given the query.

Using the notations in Section 3.2.1, we can extend NDCG to κ-NDCG with the following definition:

$$\kappa\text{-}NDCG@l=\frac{1}{N_l}\sum_{j=1}^l\frac{2^{y_j^{(i)}}-1}{\log_2(1+j)},\qquad(2)$$

where $N_l$ is a constant which denotes the maximum value of κ-NDCG@l given the query.
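A minimal sketch of Eq. (2); the function name and list interface are our assumptions, with `ranked_labels` holding the position-aware labels y in the model's output order:

```python
import math

def kappa_ndcg(ranked_labels, l):
    """kappa-NDCG@l: standard NDCG computed with the position-aware
    labels (k + 1 - position for top-k items, 0 for the rest)."""
    def dcg(labels):
        return sum((2 ** r - 1) / math.log2(1 + j)
                   for j, r in enumerate(labels[:l], start=1))
    ideal = dcg(sorted(ranked_labels, reverse=True))  # the normalizer N_l
    return dcg(ranked_labels) / ideal if ideal > 0 else 0.0
```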

3.3.2 κ-ERR

We first give the precise definition of ERR as follows:

$$ERR=\sum_{i=1}^n\frac{1}{i}R(r_i)\prod_{j=1}^{i-1}(1-R(r_j)),\qquad R(r)=\frac{2^r-1}{2^{r_{max}}},$$

where $n$ is the number of documents for a query, and $r_{max}$ is the highest relevance label for this query.

Similarly to κ-NDCG, we can extend ERR to κ-ERR with the following definition:

$$\kappa\text{-}ERR=\sum_{s=1}^n\frac{1}{s}R(y_s^{(i)})\prod_{t=1}^{s-1}(1-R(y_t^{(i)})),\qquad R(r)=\frac{2^r-1}{2^{y_{max}^{(i)}}},\qquad(3)$$

where $y_{max}^{(i)}$ is the relevance label at the top position, as defined in Section 3.2.1.
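A matching sketch of Eq. (3), under the same assumed label interface; here `y_max` is the label of the top position (equal to k under the mapping of Section 3.2.1):

```python
def kappa_err(ranked_labels, y_max):
    """kappa-ERR: ERR computed with the position-aware labels;
    R(r) = (2^r - 1) / 2^y_max is the stop probability at a result."""
    err, not_stopped = 0.0, 1.0
    for s, r in enumerate(ranked_labels, start=1):
        stop = (2 ** r - 1) / 2 ** y_max
        err += not_stopped * stop / s    # user reaches position s, stops here
        not_stopped *= 1.0 - stop
    return err
```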

4. EXPERIMENTAL RESULTS

In this section, we empirically evaluate our proposed top-k labeling strategy and ranking model. Firstly, we conducted user studies to compare the effectiveness and efficiency of our top-k labeling strategy with traditional absolute judgment, based on the dataset from the Topic Distillation task of TREC 2003. Secondly, we compared FocusedRank with its corresponding traditional ranking models and top-k ListMLE [30], based on both the ground-truths from the absolute judgment and the top-k ground-truths. The experimental results show the superiority of our labeling strategy and ranking model over previous work.

^3 Since MAP is mainly designed for the binary judgment scenario, we omit the modification of it.

Table 1: Comparison results of time efficiency

Method               | Time per judgment (s) | Time per query (min) | Judgment complexity | #Judgments per query
Top-k labeling       | 5.51                  | 13.13                | O(n log k)          | 142.76
Five-grade judgment  | 13.87                 | 11.78                | O(n)                | 50

4.1 Top-k Labeling

To study whether top-k labels are "easier" to make than absolute relevance judgments, we compare the top-k labeling strategy (i.e., k=10) with the popular five-graded ("bad", "fair", "good", "excellent", "perfect") absolute relevance judgment method. We investigate which one is a better judgment method under two basic metrics, namely time efficiency [26] and agreement among assessors [7].

4.1.1 Experiment Design

We describe our experimental design from the following four aspects:

Data Set: We adopted all the 50 queries from the Topic Distillation task of TREC 2003 as our query set. For each query, we then randomly sampled 50 documents from its associated documents for judgment. Existing Web pages in the corpus of TREC 2003 were employed in labeling to avoid the time delay of downloading content from the Internet. In TREC topics, most of the queries have a clear intent in the form of query descriptions, which are also exhibited along with the queries in labeling for better understanding.

Labeling Tools: We designed labeling tools for the two judgment methods separately, i.e., the top-k labeling tool T1 and the traditional five-graded relevance judgment tool T2. In T1, a query with its description is shown at the top, and two associated Web pages are placed in the main area. An assessor is then asked to decide which one is more relevant. In T2, a query with its description is shown at the top followed by five grade buttons, and a Web page is placed in the main area. An assessor is asked to decide which grade should be assigned to that page. A timer is introduced into both tools for computing time per judgment. Assessors can click the clock button to stop the timer if they want to have a break or leave for a while. This ensures the computing accuracy.

Assessors: There are five assessors participating in our user study. These assessors are all graduate students who are familiar with Web search. They all received training in advance on how to use the tools and on the specifications of the five grades.

Assignment: To make the comparison valid, the assignment should meet the following requirements. Firstly, for each method, all the selected documents should be judged at least once to obtain a complete data set. Secondly, each assessor is likely to memorize some information about the documents for a given query after judging. Therefore, to compare the methods independently, we shall ensure that each assessor will not see the same query under different tools, to minimize the possible order effect. Finally, each tool has to be utilized by all the assessors to avoid possible differences between individuals. Therefore, we adopted the following assignment to satisfy all the requirements: (1)

        A≻B     A~B     A≺B
A≻B     0.6749  0.2766  0.0485
A~B     0.1138  0.8198  0.0664
A≺B     0.1047  0.3779  0.5174

Table 2: Assessor agreement for preference judgments in top-k labeling results

        A≻B     A~B     A≺B
A≻B     0.6272  0.2913  0.0815
A~B     0.2825  0.5232  0.1944
A≺B     0.1534  0.3826  0.4640

Table 3: Assessor agreement for inferred preference judgments in five-graded labeling results

all the 50 queries are divided into five folds $\{Q_i\}_{i=1}^5$, where each fold has 10 queries; (2) for $i=1,\dots,4$, each assessor $U_i$ judges $Q_i$ with T1 and $Q_{i+1}$ with T2, and assessor $U_5$ judges $Q_5$ with T1 and $Q_1$ with T2. At least two such different assignments are needed to compute the agreement among assessors.

4.1.2Evaluation Results and Discussions

Two basic metrics are utilized to evaluate the labeling strategies.

(1)Time e?ciency:Time e?ciency[26]is used to mea-sure the cost of labeling.It is dependent on the time per judgment and the complexity of judgment(the total num-ber of judgments for each query).Statistically,less time per judgment suggests easier judgment.

(2) Agreement: Here we measure the agreement between two assessors over all judgments as in [7], to average out differences in expertise, prior knowledge, or understanding of the query. For comparison, we inferred preferences from the absolute judgments: if the judgment on item A was greater than the judgment on item B, we inferred that A was preferred to B (denoted by A ≻ B). A tie between A and B is denoted by A ~ B.
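The inference rule can be sketched as follows (a minimal illustration; the document ids and grades are made up):

```python
# Infer preference pairs from absolute graded judgments:
# grade(A) > grade(B) yields "A > B"; equal grades yield a tie "A ~ B".

from itertools import combinations

def infer_preferences(grades):
    """grades: {doc_id: grade}. Returns {(a, b): '>' | '<' | '~'}."""
    prefs = {}
    for a, b in combinations(sorted(grades), 2):
        if grades[a] > grades[b]:
            prefs[(a, b)] = ">"
        elif grades[a] < grades[b]:
            prefs[(a, b)] = "<"
        else:
            prefs[(a, b)] = "~"
    return prefs

print(infer_preferences({"d1": 3, "d2": 1, "d3": 1}))
# {('d1', 'd2'): '>', ('d1', 'd3'): '>', ('d2', 'd3'): '~'}
```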

As shown in Table 1, the average time per judgment for absolute judgments is clearly longer than that for preference judgments in the top-k labeling strategy, by a factor of about 2~3. This verifies the common intuition that preference judgment is easier than absolute judgment. Meanwhile, from the average number of judgments conducted on each query, we find that the top-k labeling strategy takes more judgments than the absolute method, at a scale around the theoretical value log k (with k = 10). Most importantly, the total judgment time spent on each query is comparable between the two methods. These results indicate that, with the top-k labeling strategy, the cost of pairwise preference judgment becomes similar to that of absolute judgment. It is therefore feasible to use top-k labeling in practice.
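To see where the savings over a full O(n log n) sort come from, the following sketch (ours, with simulated relevance scores standing in for assessor answers; not necessarily the exact procedure of Section 3) counts pairwise comparisons when a heap-based selection extracts the top 10 of 1000 documents:

```python
# Each __lt__ call stands for one "which is more relevant?" question
# posed to an assessor.

import heapq
import math
import random

class Judged:
    """Wraps a document; counts every pairwise comparison it takes part in."""
    comparisons = 0

    def __init__(self, relevance):
        self.relevance = relevance

    def __lt__(self, other):
        Judged.comparisons += 1
        # Inverted so the min-heap surfaces the most preferred document first.
        return self.relevance > other.relevance

random.seed(0)
docs = [Judged(random.random()) for _ in range(1000)]   # n = 1000 candidates
heapq.heapify(docs)                                      # O(n) comparisons
top_10 = [heapq.heappop(docs) for _ in range(10)]        # k pops, O(k log n)

print(Judged.comparisons)                  # far below n*log2(n) ~ 9966
assert Judged.comparisons < 1000 * math.log2(1000)
```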

The agreement among assessors for preference judgment

Graded MQ2007 (Graded judgments)

Methods        N@1    N@2    N@3    N@4    N@5    N@6    N@7    N@8    N@9    N@10   ERR
SVM MAP        0.4006 0.4039 0.4111 0.4128 0.4165 0.4217 0.4266 0.4322 0.4363 0.4419 0.3146
RankSVM        0.4118 0.4058 0.4051 0.4099 0.4173 0.4216 0.4267 0.4320 0.4380 0.4447 0.3178
FocusedSVM     0.4060 0.4083 0.4077 0.4079 0.4123 0.4179 0.4234 0.4299 0.4351 0.4400 0.3196
AdaRank        0.3789 0.3941 0.3955 0.4024 0.4066 0.4119 0.4169 0.4225 0.4287 0.4345 0.3061
RankBoost      0.3972 0.4051 0.4062 0.4095 0.4150 0.4197 0.4260 0.4316 0.4374 0.4438 0.3101
FocusedBoost   0.3947 0.4098 0.4110 0.4132 0.4164 0.4235 0.4282 0.4317 0.4368 0.4422 0.3199
ListNet        0.4114 0.4110 0.4134 0.4153 0.4204 0.4243 0.4300 0.4332 0.4389 0.4442 0.3206
RankNet        0.4013 0.4086 0.4076 0.4118 0.4156 0.4211 0.4267 0.4330 0.4379 0.4451 0.3157
FocusedNet     0.3968 0.4103 0.4126 0.4153 0.4191 0.4245 0.4301 0.4341 0.4403 0.4459 0.3223
Top-k ListMLE  0.4057 0.4091 0.4115 0.4143 0.4188 0.4247 0.4293 0.4346 0.4392 0.4443 0.3168

Top-k MQ2007 (Top-k judgments)

Methods        κ-N@1  κ-N@2  κ-N@3  κ-N@4  κ-N@5  κ-N@6  κ-N@7  κ-N@8  κ-N@9  κ-N@10 κ-ERR
SVM MAP        0.4530 0.5023 0.5389 0.5702 0.5951 0.6178 0.6346 0.6479 0.6591 0.6690 0.6227
RankSVM        0.4545 0.4932 0.5315 0.5664 0.5930 0.6135 0.6306 0.6445 0.6564 0.6655 0.6205
FocusedSVM     0.4539 0.5087 0.5448 0.5773 0.6024 0.6224 0.6404 0.6527 0.6646 0.6739 0.6287
AdaRank        0.3896 0.4438 0.4745 0.5062 0.5358 0.5575 0.5777 0.5952 0.6089 0.6190 0.5637
RankBoost      0.4438 0.4825 0.5243 0.5567 0.5850 0.6066 0.6234 0.6361 0.6476 0.6571 0.6131
FocusedBoost   0.4529 0.4943 0.5312 0.5633 0.5918 0.6123 0.6288 0.6420 0.6523 0.6628 0.6187
ListNet        0.4541 0.4937 0.5314 0.5669 0.5922 0.6132 0.6299 0.6419 0.6530 0.6613 0.6195
RankNet        0.4490 0.4875 0.5269 0.5586 0.5865 0.6077 0.6249 0.6390 0.6512 0.6603 0.6143
FocusedNet     0.4623 0.5108 0.5460 0.5783 0.6028 0.6240 0.6409 0.6529 0.6633 0.6735 0.6336
Top-k ListMLE  0.4570 0.4991 0.5356 0.5719 0.5969 0.6176 0.6339 0.6476 0.6585 0.6673 0.6228

Table 4: Performance comparison on Graded MQ2007 and Top-k MQ2007

in top-k labeling and for inferred preference judgment in absolute judgment is shown in Table 2 and Table 3, respectively. Each cell (X1, X2) is the probability that one assessor would say X2 (column) given that another assessor said X1 (row); the tables are therefore row-normalized. From the results, we can see that by adopting preference judgment, top-k labeling largely improves the agreement among assessors over absolute judgment. The overall agreement among assessors reaches 74.5% under top-k labeling, while it is only 54.7% under absolute judgment. We conducted a χ² test to compare the ratio of the number of pairs agreed on to the number disagreed on, for top-k labeling versus absolute judgment. The difference is significant (χ² = 420.7, df = 1, p < 0.001). We also investigated the agreement among assessors on preference pairs only (ignoring ties): the agreement ratio is 89.7% under top-k labeling and 83.1% under absolute judgment. We tested the difference between the two methods using the ratio of agreed preference pairs to disagreed preference pairs, which is also significant (χ² = 28.5, df = 1, p < 0.001).
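The statistic used here is the standard Pearson χ² test on a 2x2 contingency table (labeling method x agree/disagree). A minimal sketch with made-up placeholder counts, not the paper's raw pair counts:

```python
# Pearson chi-square for a 2x2 table: rows = labeling methods,
# columns = (pairs agreed on, pairs disagreed on).

def chi_square_2x2(table):
    """Return the Pearson chi-square statistic (df = 1) for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n   # expected = row_total * col_total / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# e.g. 745 agreed / 255 disagreed (top-k) vs 547 / 453 (absolute):
print(round(chi_square_2x2([(745, 255), (547, 453)]), 1))  # -> 85.7
```

With the paper's actual (much larger) pair counts, the same computation yields the reported χ² = 420.7.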

From the above results, we can conclude that the top-k labeling strategy is both efficient and effective in obtaining reliable judgments from human assessors, compared with traditional absolute judgment.

4.2 Performance of FocusedRank

In this section, we empirically evaluate the performance of our proposed ranking model, FocusedRank. Specifically, we conducted extensive experiments to compare FocusedRank with different state-of-the-art ranking models on both the ground-truths from absolute judgments and the top-k ground-truths. Note that k is set to 10 in our experiments. We also investigated the impact of the balance factor β in FocusedRank.

4.2.1 Experimental Settings

For comparison, we constructed two datasets, each with both absolute judgments and top-k labeling. One dataset comes from the benchmark LETOR 4.0 collection. There are two homologous datasets with different labeling in LETOR 4.0: one is MQ2007, with three-graded relevance judgments, and the other is MQ2007-list, with total-order judgments as the ground-truth. The two datasets share the same queries; the only difference is that the documents of a query in MQ2007 are a subset of the documents of the corresponding query in MQ2007-list. We therefore adopted the intersection of the two document sets for each query. The labels from MQ2007 form the ground-truth with absolute judgments, referred to as Graded MQ2007, while those from MQ2007-list become the top-k ground-truth by preserving only the total order of the top k documents for each query, referred to as Top-k MQ2007.

The other dataset is the one manually constructed in the previous user-study experiments, with 50 queries from the TREC 2003 Topic Distillation task. The version with five-graded absolute relevance judgments is denoted Graded TD2003, and the version with top-k labeling is denoted Top-k TD2003. We divided each dataset into five subsets and conducted 5-fold cross-validation. In each trial, three folds were used for training, one fold for validation, and one fold for testing. For RankSVM and SVM MAP, the validation set in each trial was used to tune the coefficient C. For RankNet and ListNet, it was used to determine the number of iterations. For our FocusedRank, the validation set was used to tune the balance factor β.

Graded TD2003 (Graded judgments)

Methods        N@1    N@2    N@3    N@4    N@5    N@6    N@7    N@8    N@9    N@10   ERR
SVM MAP        0.5055 0.5356 0.5335 0.5374 0.5515 0.5584 0.5669 0.5743 0.5763 0.5801 0.5102
RankSVM        0.5019 0.5409 0.5446 0.5657 0.5816 0.5829 0.5867 0.5847 0.5905 0.5991 0.5072
FocusedSVM     0.5126 0.5505 0.5498 0.5633 0.5642 0.5656 0.5642 0.5761 0.5840 0.5885 0.5129
AdaRank        0.5278 0.5436 0.5619 0.5740 0.5774 0.5797 0.5845 0.5872 0.5944 0.5982 0.5132
RankBoost      0.5230 0.5094 0.5179 0.5362 0.5456 0.5507 0.5555 0.5623 0.5640 0.5750 0.5119
FocusedBoost   0.5486 0.5275 0.5356 0.5532 0.5554 0.5706 0.5790 0.5856 0.5876 0.5955 0.5466
ListNet        0.5229 0.5480 0.5514 0.5663 0.5736 0.5804 0.5885 0.5949 0.5944 0.5985 0.5225
RankNet        0.4971 0.5304 0.5612 0.5638 0.5642 0.5765 0.5817 0.5837 0.5895 0.5983 0.5059
FocusedNet     0.5265 0.5660 0.5642 0.5706 0.5780 0.5804 0.5848 0.5898 0.5948 0.6082 0.5660
Top-k ListMLE  0.4994 0.5190 0.5356 0.5504 0.5540 0.5717 0.5746 0.5798 0.5861 0.5885 0.5083

Top-k TD2003 (Top-k judgments)

Methods        κ-N@1  κ-N@2  κ-N@3  κ-N@4  κ-N@5  κ-N@6  κ-N@7  κ-N@8  κ-N@9  κ-N@10 κ-ERR
SVM MAP        0.2248 0.2822 0.2884 0.2973 0.3271 0.3359 0.3459 0.3606 0.3769 0.3858 0.3839
RankSVM        0.2246 0.2893 0.2921 0.3083 0.3253 0.3387 0.3561 0.3703 0.3794 0.3872 0.4025
FocusedSVM     0.2243 0.2965 0.2958 0.3192 0.3235 0.3415 0.3662 0.3801 0.3819 0.3886 0.4041
AdaRank        0.2029 0.2979 0.2995 0.3116 0.3289 0.3379 0.3427 0.3574 0.3656 0.3766 0.3777
RankBoost      0.2082 0.2931 0.2921 0.3103 0.3255 0.3365 0.3576 0.3654 0.3782 0.3789 0.3970
FocusedBoost   0.2583 0.3168 0.3124 0.3191 0.3346 0.3517 0.3634 0.3732 0.3861 0.3980 0.4214
ListNet        0.2685 0.2731 0.2694 0.2789 0.3044 0.3142 0.3252 0.3404 0.3514 0.3624 0.3962
RankNet        0.2776 0.2946 0.2946 0.3021 0.3227 0.3372 0.3470 0.3688 0.3763 0.3813 0.4199
FocusedNet     0.2473 0.3268 0.3237 0.3223 0.3507 0.3605 0.3705 0.3769 0.3849 0.4058 0.4603
Top-k ListMLE  0.2389 0.2861 0.3055 0.3222 0.3480 0.3552 0.3668 0.3702 0.3913 0.4007 0.4028

Table 5: Performance comparison on Graded TD2003 and Top-k TD2003

Besides, in our experiments, when applying traditional ranking models to top-k ground-truths, we relate the top-k ground-truth to absolute labels as defined in Section 3.2.1. When applying top-k ranking models (i.e., FocusedRank and top-k ListMLE) to absolute-judgment datasets, we randomly generate a total order of the documents consistent with the graded labels and preserve the top-k order for learning. To measure ranking effectiveness, NDCG [13] and ERR [8] are used on ground-truths from absolute judgments, while κ-NDCG and κ-ERR are adopted on top-k ground-truths.
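For reference, the standard forms of NDCG@k [13] and ERR [8] can be sketched as below; the κ-variants defined for top-k ground-truths extend these and are not reproduced here. The example grades are made up:

```python
# Standard NDCG@k and ERR@k on graded relevance labels.

import math

def ndcg_at_k(ranked_grades, k):
    """ranked_grades: grades in the order the model ranked the documents."""
    def dcg(grades):
        return sum((2 ** g - 1) / math.log2(i + 2)
                   for i, g in enumerate(grades[:k]))
    ideal = dcg(sorted(ranked_grades, reverse=True))
    return dcg(ranked_grades) / ideal if ideal > 0 else 0.0

def err_at_k(ranked_grades, k, max_grade=4):
    """Expected reciprocal rank with stop probability (2^g - 1) / 2^g_max."""
    err, p_continue = 0.0, 1.0
    for i, g in enumerate(ranked_grades[:k]):
        r = (2 ** g - 1) / 2 ** max_grade
        err += p_continue * r / (i + 1)
        p_continue *= 1 - r
    return err

grades = [2, 0, 1]                        # a model's ranking, graded 0..4
print(round(ndcg_at_k(grades, 3), 4))     # -> 0.9639
```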

4.2.2 Comparison Results

The performance comparison between the different ranking models on the two datasets is shown in Table 4 and Table 5, respectively.

From the results on Graded MQ2007, shown in the upper part of Table 4, we can see that the overall performance of FocusedRank is comparable with traditional pairwise and listwise ranking models in terms of both NDCG (N@j) and ERR. This shows that even though FocusedRank is designed for the top-k ground-truth, it works quite well on traditional absolute-judgment datasets under traditional IR measures. These results also reveal that learning the ordering of the top items well is critical for the success of a learning-to-rank algorithm. Similar results can be found on the Graded TD2003 dataset, shown in the upper part of Table 5.

From the results on top-k ground-truths in both tables (i.e., the bottom parts), we find that FocusedRank significantly outperforms the corresponding pairwise and listwise ranking models in terms of both κ-NDCG (κ-N@j) and κ-ERR. For example, for FocusedBoost, the relative improvements over AdaRank and RankBoost are about 7.08% and 0.87% in terms of κ-NDCG@10, respectively, and about 9.76% and 0.91% in terms of κ-ERR, respectively. We can also observe that under all the metrics, the best performance is almost always reached by one of our FocusedRank variants (the best performance is denoted by the number in bold). The results indicate that FocusedRank is particularly suitable for the top-k ground-truth: by combining the advantages of both pairwise and listwise approaches, FocusedRank can fully exploit the information in the top-k ground-truth and thus outperforms each single model. Moreover, when we compare FocusedRank with the state-of-the-art top-k ranking model, top-k ListMLE, we see comparable performance on both absolute-judgment datasets and top-k ground-truths. In fact, some of our FocusedRank models, e.g., FocusedNet, consistently outperform top-k ListMLE on Top-k MQ2007 in terms of both κ-NDCG and κ-ERR. These results demonstrate that FocusedRank, as a mixed ranking model, can effectively cope with the top-k learning-to-rank problem.
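The quoted improvements are plain relative ratios over the κ-NDCG@10 column of Table 4, which is easy to verify:

```python
# Relative improvement of a new score over a baseline score, in percent.

def rel_improvement(new, old):
    return (new - old) / old * 100

print(round(rel_improvement(0.6628, 0.6190), 2))  # FocusedBoost vs AdaRank  -> 7.08
print(round(rel_improvement(0.6628, 0.6571), 2))  # FocusedBoost vs RankBoost -> 0.87
```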

4.2.3 The Impact of the Balance Factor β

Here we investigate the impact of the balance factor β in FocusedRank. Varying β from 0 to 1 with a step of 0.05, we plot the ranking performance curves of FocusedRank in terms of κ-NDCG and κ-ERR in Figure 1 and Figure 2. Each performance value on the test sets shown in the figures is averaged over five-fold cross-validation, in the same way as in LETOR.

For space limitations, we show only the @5 and @10 results for NDCG and κ-NDCG. In Figure 1, the performance variations on Graded MQ2007 are represented as curves with open symbols, while those on

Figure 1: Performance variation of FocusedRank with β on Graded MQ2007 and Top-k MQ2007 under corresponding evaluation measures. (a) FocusedSVM, (b) FocusedBoost, (c) FocusedNet.

Top-k MQ2007 are represented as curves with filled symbols. The results of FocusedSVM, FocusedBoost and FocusedNet are shown in Figure 1 (a), (b) and (c), respectively. To make the variation trend clearer, each figure adopts double y-axes: the black curves of κ-NDCG@5 (κ-N@5), κ-NDCG@10 (κ-N@10) and κ-ERR use the left y-axis, while the blue curves of NDCG@5 (N@5), NDCG@10 (N@10) and ERR use the right y-axis. Similarly, in Figure 2 we also use double y-axes: the performance variations on Graded TD2003 are represented as curves with open symbols, while those on Top-k TD2003 are represented as curves with filled symbols. The results of FocusedSVM, FocusedBoost and FocusedNet are shown in Figure 2 (a), (b) and (c), respectively.

From the results shown in Figure 1 and Figure 2, we find that: (1) There is a consistent trend for the three types of FocusedRank on the two groups of datasets: as β increases, the ranking performance first grows to reach its maximum, and then drops. (2) The overall variation of each performance curve is small. Taking the most varied curves as examples, the variance of the mean is 2.7% for the κ-NDCG@5 curve on Top-k MQ2007 and 6.2% for the corresponding curve on Top-k TD2003. (The small size of the TD2003 datasets, with 50 queries versus more than one thousand on the MQ2007 datasets, may explain why their variation curves are less smooth.)

Thus, we conclude that the performance of FocusedRank is relatively stable with respect to β.

5. CONCLUSIONS

In this paper, we propose a novel top-k learning to rank framework, including labeling, ranking and evaluation, which can be effectively adopted in real search systems. Firstly, a top-k labeling strategy is proposed to obtain reliable relevance judgments from human assessors via pairwise preference judgment; this strategy largely reduces the complexity of pairwise preference judgment to O(n log k). Secondly, a novel ranking model, FocusedRank, is presented to capture the characteristics of the top-k ground-truth. Thirdly, two new top-k evaluation measures are derived to fit the top-k ground-truth. We verify the efficiency and reliability of the proposed top-k labeling strategy through user studies, and demonstrate the effectiveness of the top-k ranking model by comparing it with state-of-the-art ranking models.

There are many interesting issues for further investigation under our top-k learning to rank framework. (1) The top-k labeling strategy could be improved to further reduce the judgment complexity; for example, we may introduce a "Bad" judgment, as in [7], for pages that are clearly irrelevant, to further save labeling effort. (2) Given top-k ground-truths, the design of new ranking models remains a valuable problem to investigate. (3) It is also possible to develop new top-k based evaluation measures for better comparison between different systems.

6. ACKNOWLEDGMENTS

This research work was funded by the National Natural Science Foundation of China under Grants No. 60933005, No. 61173008 and No. 61003166, and by the 973 Program of China under Grant No. 2012CB316303.

Figure 2: Performance variation of FocusedRank with β on Graded TD2003 and Top-k TD2003 under corresponding evaluation measures. (a) FocusedSVM, (b) FocusedBoost, (c) FocusedNet.

7. REFERENCES

[1] N. Ailon and M. Mohri. An efficient reduction of ranking to classification. COLT '08, pages 87–98, 2008.
[2] C. Buckley and E. M. Voorhees. Retrieval system evaluation. In TREC: Experiment and Evaluation in Information Retrieval. MIT Press, 2005.
[3] C. Burges, T. Shaked, et al. Learning to rank using gradient descent. ICML '05, pages 89–96, 2005.
[4] R. Burgin. Variations in relevance judgments and the evaluation of retrieval performance. IPM, 28:619–627, 1992.
[5] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. ICML '07, pages 129–136, 2007.
[6] B. Carterette and P. N. Bennett. Evaluation measures for preference judgments. SIGIR '08, pages 685–686, 2008.
[7] B. Carterette, P. N. Bennett, D. M. Chickering, and S. T. Dumais. Here or there: preference judgments for relevance. ECIR '08, pages 16–27, 2008.
[8] O. Chapelle, D. Metlzer, Y. Zhang, and P. Grinspan. Expected reciprocal rank for graded relevance. CIKM '09, pages 621–630, 2009.
[9] S. Clémençon and N. Vayatis. Ranking the best instances. JMLR, 8:2671–2699, 2007.
[10] D. Cossock and T. Zhang. Subset ranking using regression. Learning Theory, 4005:605–619, 2006.
[11] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. JMLR, 4:933–969, 2003.
[12] E. P. Cox III. The optimal number of response alternatives for a scale: a review. Journal of Marketing Research, 17(4):407–422.
[13] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. SIGIR '00, pages 41–48, 2000.
[14] T. Joachims. Optimizing search engines using clickthrough data. KDD '02, pages 133–142, 2002.
[15] J. Kekäläinen. Binary and graded relevance in IR evaluations: comparison of the effects on ranking of IR systems. IPM, 41:1019–1033, 2005.
[16] A. Moffat and J. Zobel. Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. Inf. Syst., 27:2:1–2:27, 2008.
[17] P. Li, C. Burges, and Q. Wu. McRank: learning to rank using multiple classification and gradient boosting. NIPS 2007, pages 845–852.
[18] T. Qin, T.-Y. Liu, et al. LETOR: a benchmark collection for research on learning to rank for information retrieval. Information Retrieval, 13:346–374, 2010.
[19] R. Tang, W. M. Shaw, Jr., and J. L. Vevea. Towards the identification of the optimal number of relevance categories. JASIS, 50:254–264, 1999.
[20] K. Radinsky and N. Ailon. Ranking from pairs and triplets: information quality, evaluation methods and query complexity. WSDM '11, pages 105–114, 2011.
[21] F. Radlinski and T. Joachims. Query chains: learning to rank from implicit feedback. KDD '05, pages 239–248, 2005.
[22] R. F. Guy and M. Norvell. The neutral point on a Likert scale. Journal of Psychology, 95:199–204, 1971.
[23] R. L. Plackett. The analysis of permutations. Applied Statistics, 24(2):193–202, 1975.
[24] M. Rorvig. The simple scalability of documents. JASIS, 41:590–598, 1990.
[25] C. Rudin. Ranking with a p-norm push. COLT, pages 589–604, 2006.
[26] R. Song, Q. Guo, R. Zhang, et al. Select-the-best-ones: a new way to judge relative relevance. IPM, 47:37–52, 2011.
[27] E. M. Voorhees. Variations in relevance judgments and the measurement of retrieval effectiveness. SIGIR '98, pages 315–323, 1998.
[28] E. M. Voorhees. Variations in relevance judgments and the measurement of retrieval effectiveness. IPM, 36:697–716, 2000.
[29] E. M. Voorhees. Evaluation by highly relevant documents. SIGIR '01, pages 74–82, 2001.
[30] F. Xia, T.-Y. Liu, and H. Li. Statistical consistency of top-k ranking. NIPS, pages 2098–2106, 2009.
[31] J. Xu and H. Li. AdaRank: a boosting algorithm for information retrieval. SIGIR '07, pages 391–398, 2007.
[32] Y. Y. Yao. Measuring retrieval effectiveness based on user preference of documents. JASIS, 46:133–145, 1995.
[33] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. SIGIR '07, pages 271–278, 2007.
[34] B. Zhou and Y. Yao. Evaluating information retrieval system performance based on user preference. JIIS, 34:227–248, 2010.

新版人教版高中语文课本的目录。

必修一阅读鉴赏第一单元1.*沁园春?长沙……………………………………毛泽东3 2.诗两首雨巷…………………………………………戴望舒6 再别康桥………………………………………徐志摩8 3.大堰河--我的保姆………………………………艾青10 第二单元4.烛之武退秦师………………………………….《左传》16 5.荆轲刺秦王………………………………….《战国策》18 6.*鸿门宴……………………………………..司马迁22 第三单元7.记念刘和珍君……………………………………鲁迅27 8.小狗包弟……………………………………….巴金32 9.*记梁任公先生的一次演讲…………………………梁实秋36 第四单元10.短新闻两篇别了,“不列颠尼亚”…………………………周婷杨兴39 奥斯维辛没有什么新闻…………………………罗森塔尔41 11.包身工………………………………………..夏衍44 12.*飞向太空的航程……………………….贾永曹智白瑞雪52 必修二阅读鉴赏第一单元1.荷塘月色…………………………………..朱自清2.故都的秋…………………………………..郁达夫3.*囚绿记…………………………………..陆蠡第二单元4.《诗经》两首氓采薇5.离骚………………………………………屈原6.*《孔雀东南飞》(并序) 7.*诗三首涉江采芙蓉《古诗十九首》短歌行……………………………………曹操归园田居(其一)…………………………..陶渊明第三单元8.兰亭集序……………………………………王羲之9.赤壁赋……………………………………..苏轼10.*游褒禅山记………………………………王安石第四单元11.就任北京大学校长之演说……………………..蔡元培12.我有一个梦想………………………………马丁?路德?金1 3.*在马克思墓前的讲话…………………………恩格斯第三册阅读鉴赏第一单元1.林黛玉进贾府………………………………….曹雪芹2.祝福………………………………………..鲁迅3. *老人与海…………………………………….海明威第二单元4.蜀道难……………………………………….李白5.杜甫诗三首秋兴八首(其一) 咏怀古迹(其三) 登高6.琵琶行(并序)………………………………..白居易7.*李商隐诗两首锦瑟马嵬(其二) 第三单元8.寡人之于国也…………………………………《孟子》9.劝学……………………………………….《荀子》10.*过秦论…………………………………….贾谊11.*师说………………………………………韩愈第四单元12.动物游戏之谜………………………………..周立明13.宇宙的边疆………………………………….卡尔?萨根14.*凤蝶外传……………………………………董纯才15.*一名物理学家的教育历程……………………….加来道雄第四册阅读鉴赏第一单元1.窦娥冤………………………………………..关汉卿2.雷雨………………………………………….曹禹3.*哈姆莱特……………………………………莎士比亚第二单元4.柳永词两首望海潮(东南形胜) 雨霖铃(寒蝉凄切) 5.苏轼词两首念奴娇?赤壁怀古定风波(莫听穿林打叶声) 6.辛弃疾词两首水龙吟?登建康赏心亭永遇乐?京口北固亭怀古7.*李清照词两首醉花阴(薄雾浓云愁永昼) 声声慢(寻寻觅觅) 第三单元8.拿来主义……………………………………….鲁迅9.父母与孩子之间的爱……………………………..弗罗姆10.*短文三篇热爱生

心愿作文700字完美版

心愿作文700字 风来,蒲公英散了;秋来,树叶落了。夕阳下我们奔跑的身影已慢慢退去,还记得我给你指过的那颗最亮的星星吗?我已经让它把我的话悄悄带入你的梦乡。 小学的六年,我最稚嫩的时期,刚转学过来的我,不愿将心事坦露。感谢你这些年来的帮助与关怀。现在我终于明白了那句话:一场考试,考散了一群人。我至今也不能理解,生活六年的同学为什么要分开?可人在慢慢地长大,也明白了新的生活在继续,新的事物也在靠近。你还记得,当年我们的约定吗?我们说二十年后要一起到巴黎铁塔下许下新的梦,因为你认为那是一件美好浪漫的事。 开学了,我认识了好多新朋友…… 那天,我走进707这个大家庭,那一张张陌生却纯真的脸,那一双双眼里充满着梦想,脸上洋溢着青春,身上散发着朝气。老师的话语中则带着满满的希望和期待,我们会拿出100分的状态,告诉老师:我们可以做得很好。看着新同学,我的骨子里不知从哪又冒出当年热血沸腾的那股冲劲。我知道追梦这条路上的坎坷和曲折,我将和新同学一起努力,因为我相信他们和我一样,就算汗水淌过脸庞,就算泪水划过侧脸,我们也绝不放弃,绝不低头。我们一定会闯过重重难关,一路披荆斩棘,最终走向目标,获得成功。就算跌倒100次,我也会华丽地站起;就算失败100次,在第101次我仍然会坚定地说:让我再试试!所以我会奔跑,一直不停地奔跑,就算是逆风,我也会努力。 不知道你的新同学怎么样?你是不是也和我一样,把身上的正能量传递给别人?还记得毕业分开那天,你对我说:杜福容,你不能哭你要一直微笑,因为哭了就不好看了,只有笑着才能拥有王者般的风范,才会像胜利者那样有着优雅的姿态。谢谢你,我的生命中有你的出现,我的成长中有你的陪伴。 当那颗星星走进你的梦中,麻烦你让它带着你最新的消息回来告诉我,好吗?

演讲稿:教师,我无悔的选择

教师,我无悔的选择 各位领导,各位老师: 大家好!今天我演讲的题目是:《教师,我无悔的选择》。 有一首诗里这么写道:我是一个老师,我把手中的红烛高高举起,只要我的鸽群能翱翔蓝天,只要我的雏鹰能鹏程万里,只要我的信念能坚如磐石,只要我的理想能永远年轻,我情愿燃烧我的每一寸躯体,让我的青春乃至整个的生命,在跳动的烛火中升华,在血液的鲜红中奔腾…… 我无悔的选择来自对理想的追求。我来自一个教师家庭,我的父母都是教师,在他们的熏陶下,我打小便确立了自己的理想——做一名光荣的人民教师!七年前,怀揣着美丽的梦想,我走上了教师岗位。刚参加工作的时候,我满腔热情地投入到教学当中,融入到学生当中,恨不得一下子把全部的知识都交给学生。清晨,当黎明的曙光洒进教室,我在三尺讲台旁聆听孩子们琅琅的读书声,那是天底下最动听的乐章,我仿佛在享受人间最美的旋律;黄昏,踏着最后一抹夕阳,我目送他们离校的背影;夜晚灯光下,看着孩子们那稚嫩的笔迹,在我严格的要求下逐渐成长,我深深感到:这就是我人生最高的追求,最大的安慰! 我无悔的选择来自对师爱的坚守。工作的道路并非一帆风顺,美丽的梦想也会偶尔搁浅,当我看到学生对再三讲过的知识还不明白、对再三纠正过的错误一犯再犯时,我着急了;当我看到一些学生淘气、调皮、不求上进、学习成绩一落千丈、自暴自弃的学生使人头疼万分,不知所措时,我连放弃的念头都有了。慢慢地,我发现,我的父母,还有身边的老教师,他们总能不急不躁地面对一切,用老师的温和引发出学生最善良的心,用表扬和鼓励培养班级积极向上的精神。他们的言行感动了我。望着他们呕心沥血几十年依旧孜孜不倦,看着他们青丝变白发依旧兢兢业业,我深刻地理解人们为什么总是这样描写教师:“捧着一颗心来,不带半根草去”,“春蚕到死丝方尽,蜡炬成灰泪始干”。于是,我下定决心:绝不放弃!每一颗草都有泛绿的时候,每一朵花都有盛开的理由,用爱去呵护他们,宽容他们,鼓励他们,为他们的每一次小小的进步而高兴。孩子是天使,我们就是为天使装点翅膀的人!至诚的师爱又唤回了我工作的激情,重新点燃了我美丽的梦想。

精选作文四年级下册语文各单元作文范文汇编

校园景色(1单元) 春姑娘悄悄地来到我们身边,冰雪融化,万物复苏,那青的草,绿的叶,各种色彩鲜艳的花,给校园构成一幅生机勃勃的春景图。 在校园的草坪上,可以看见嫩绿的小草从土壤里探出头来,高兴地望着我们,仿佛在说:“亲爱的同学们,我们又见面了!”一个个圆圆的蘑菇,在春雨的浇灌下,也露出了笑脸。 校园的主道两旁,杨树挺立在那里,一动不动,像一位站岗的哨兵。枝条上长出了圆圆的新叶,远远望去,像一个绿色的小桃子。那一朵朵美丽鲜艳的花儿,争齐开放,使人感到格外舒服。 每当下课的时候,悦耳的铃声传来,同学们就像一只只鸟儿一样“飞”出了教室,校园里顿时一片欢腾。兵乓球台前,围满了人,一阵阵叫好声,一阵阵欢呼声,使校园里充满了活力,充满了欢乐。 美丽的春天,美丽的校园,真是一幅美丽的画卷。 校园春色(1单元) 春姑娘悄悄地来到我们身边,冰雪融化,万物复苏,那青的草,绿的叶,各种色彩鲜艳的花,给校园构成一幅生机勃勃的春景图。 在校园的草坪上,可以看见嫩绿的小草从土壤里探出头来,高兴地望着我们,仿佛在说:“亲爱的同学们,我们又见面了!”一个个圆圆的蘑菇,在春雨的浇灌下,也露出了笑脸。 校园的主道两旁,杨树挺立在那里,一动不动,像一位站岗的哨兵。枝条上长出了圆圆的新叶,远远望去,像一个绿色的小桃子。那一朵朵美丽鲜艳的花儿,争齐开放,使人感到格外舒服。 每当下课的时候,悦耳的铃声传来,同学们就像一只只鸟儿一样“飞”出了教室,校园里顿时一片欢腾。兵乓球台前,围满了人,一阵阵叫好声,一阵阵欢呼声,使校园里充满了活力,充满了欢乐。 美丽的春天,美丽的校园,真是一幅美丽的画卷。 爸爸,我想对你说 爸爸,我想对你说:你的父爱我能感受到,但你能不能不要太严了呢? 每天,我都要写一篇日记,但有时候作业太多,你还硬逼着我写日记;你不知道,我每次写完作业都很累很累,好想看一下电视,放松一下,但你硬逼我写,没办法,我只好受点委屈了。其实,我有时很不想学,但你又非要我学不可,我又不能不听你的话,只好跟你学,你不知道,我不想学的时候,你讲的知识,我一点也没听进去,所以你等于白讲了。你还把我心爱的漫画书藏起来,买来一大堆的试卷,叫我学到哪课就做到哪课,那里我是多么想看漫画书啊!有时,我放学回家把电视机打开看,但你却叫我过去写作业,那时我是多么不乐意呀!因为你不让我看电视,而你却在那里看电视。你知道吗?我人虽然在这边,但心却飞到电视机那里去了,所以作业才会出现一些不该错的地方。每天晚上,我叫你讲故事给我听,但你却拒绝了,叫我自己看,但睡着看书容易损伤眼睛,所以又看不成,你不知道,我那时是多么想听你讲那动听的故事呀! 爸爸,你说过,我们既是父子也是朋友,那你能不能给我一个自由的天地,让我自由地发展呀!

六年级作文:心愿作文700字_1

心愿 每人都有自己的心愿,有些人心愿十分远大,大得像浩瀚的海洋,他们通常想要考取武汉二中的火箭班,还有的甚至已经计划考上硕士、博士、院士。有些人的心愿很小,小得像一粒沙土一样普通,小到想要吃一顿饱饭,或是拥有一本图画书。我的心愿可能大多数人都有,而且正在烦心——希望妈妈少一些唠叨。 由于我的成绩在班上还算不错,所以我的妈妈便给我立了一个“小目标”——考上武汉二中,而且还要是火箭班!因为这个“小目标”一直悬在我的脑门上,所以她一看到我不学习,或抵触学习,便像机关枪开火似的唠叨起来:“现在大家都在努力,俗话说‘逆水行舟,不进则退’,大家生怕自己成绩掉下去,便努力努力再努力,你不努力,再优秀到了初中也要落下来,还有……”每当听到这些话,我就感觉脑仁儿都要炸了,一副被抽了魂的样子,不管她说什么,我都只“哦”地回应一声。 有一次“谈心”,我又说:“妈妈,你不要一天到晚老是唠叨什么‘二中’不‘二中’,火箭不火箭的,我想减压呀!”妈妈一听我这句话,又开始“念紧箍咒”了:“什么?减压?笑话!要减压,做梦吧!你一减压,别人一看你松懈了,就抓紧学习,到时候,你会为你所谓‘减压’而后悔的!告诉你,二中火箭班里都是小学成绩顶尖的学生,你现在不把成绩搞好,以后到了火箭班又怎样?一样最后一名!除非放弃考取二中,否则,你就等着压力一层层压吧。” 我心中涌上一阵辛酸:现在的家长,不顾孩子感受,给孩子报补

习班,还在家里念“紧箍咒”,增大孩子的压力,不给孩子放松的时间。可他们什么时候考虑过,童年是一个人成长中最快乐的时期,可是,我的金色童年呢?我的阳光童年呢?没有!没有!我只看到书山,只看到题海。我只能在书山跋涉,在题海挣扎。 我真希望天下家长都不要再唠叨,多想想孩子的感受,让孩子感受到童年的快乐吧!这,就是我的心愿!

小学教师演讲稿无悔的选择无尽的追求

小学教师演讲稿--无悔的选择无尽的追 求 我曾听过这样的感叹,我生不逢时,没赶上英雄时代,要不我也会扬名天下!我也听过类似的抱怨,我时运不佳,没摊上个好岗位,否则咱也不想当孩子王。但我却要说,选择教师,我今生无悔,也是我毕生的追求。 96年的盛夏,带着对明天的憧憬,走上了三尽讲台,实现了我的梦想。虽是偏远,落后的杨溪村小,我却异常满足。一天到晚和孩子们打交道,我觉得很有意思,很有乐趣。天真活泼,心地善良的孩子们,在我的教育下,一天比一天懂事,一天比一天成熟。我看到了教育的力量,也深深地爱上了自己的事业。 爱岗敬业,诲人不倦,是对每位教育工作者最基本的要求,初涉教坛时,我深知在学校所学的知识远远不够。提高自身业务素质,异常重要。于是我的书桌上就多了些教育专著,教育教学杂志,读读、摘摘,博采他山石,琢为自家玉。平时,常向有经验的老师请教,听课,听讲座,学写论文,取他人之长,补已之短。 爱学生是教师的天职,没有爱就没有教育。我把爱镶在举手投足间,嵌在我的一颦一笑中,让学生时刻感受到了信任与鼓舞。我总是把学生看成自己的弟妹,不失时机地为贫

困的学生送一句安慰,为自卑的孩子送一份自信,为偏激的学生送一份冷静,让学生时刻生活在温暖中。 寒来暑往,风雨八年,我不曾为我的早出晚归而后悔,也不曾为我的挑灯夜战而遗憾。因为在我的学生身上,我看到了我的付出所开的花,所结的果。我从教虽短短八年,但在这八年里,凭着对事业的真诚迷恋,对学生的无比热爱,所教的每届学生都在进步,都在成才。我想,这与我的不懈追求是分不开的。 “衣带渐宽终不悔,为伊消得人憔悴”,在事业默默的耕耘中,我体验到了人生最大的幸福,每个教师节一封封热情洋溢的信,一张张饱含谢意的精致卡片,雪片似的从四面八方飞到我的身边。我的心里总是缀满了骄傲与自豪。我在心底里默默发誓,不为别的就为这些天真无邪的学生,我也要把工作干好,不求轰轰烈烈,但求踏踏实实;不求涓滴相报,但求今生无悔。 扩展阅读:怎样做一个受人敬仰的优秀教师 一、要有强烈的责任感。 衡量一个教师是否合格,最重要的一点就是看其有没有强烈的社会责任感。因为教育工作的根本意义在于通过培养合格的社会公民去优化和推动社会的发展。如果一个教师不

关于表扬领导工作认真的文章

关于表扬领导工作认真的文章 精品文档 关于表扬领导工作认真的文章 表扬,是对先进的思想、行为进行赞许和奖励,使之得到肯定并巩固和发扬。对于工作认真的领导做一各表扬,本文是学习啦小编整理的关于表扬领导工作认真的文章,仅供参考。 关于表扬领导工作认真的文章篇一:时不我待,精工出奇他英姿舒爽,干练利落,事事为人先;他追求完美,对工作细节严格要求。在他的带领和推动下,北京公司项目类工作飞速推进,短短小半年时间,顺利完成覆盖全国的项目战略规划布局,一个个优质的资源迅速累积。他带着北京公司项目中心的各位兄弟姐妹一起度过了2012年我们最为艰难的时期,每当风雨将临之际,他总满腹豪情,自信地对我们说:“跟着我干!”这就是北京公司项目总监——**。 北京公司坐落于北京清华科技园,一个在全国范围来说都是机会和竞争并存的战略要地。2012年,公司初建,当地社会资源匮乏、人员配备不齐,连项目和各类产品的基础条件几乎都处于空白。但这丝毫没有动摇老梁前进的信念和步伐,他发出豪言状语:“这里的舞台如此之大,怎能没有我们腾挪的空间!”没有社会支持,我们自己去找。人才不够,我们自己去招。项目条件不成熟,我们自己去思考、去摸索、去垫基。6个多月以来,他带着兄弟们找资源、聘人才、下基层、写材料……带着大家一起碰过无数钉子,熬过数 1 / 8 精品文档 十个不眠之夜。 天道酬勤!在他的带领下,如今北京公司项目中心已经和清华、北大等高等院校的经济专家们建立了优质的沟通渠道;和北京市沐浴行业协会等机构达成了战略合作关系;项目中心人员从2~3人扩张到了10多人;完成了北京公司项目整体基础构架方案和落地的各项执行标准;并顺利和优仕风、尚泉等企业合作、进入了项目的试运营阶段。这一切收获都离不开老梁的辛苦付出,每一位来访者、每一位客户、每一份文件,事无巨细,精确把控,亲力亲为。当和我们一起为每一份成功相互庆贺时,大家看着老梁,不禁感叹:“不禁一番寒澈骨,何来梅花傲骨香!” 关于表扬领导工作认真的文章篇二:又红又专、能文能武、上下认可他是我们的部门经理,高大帅气,英姿飒爽,干练利落,事事为人先;他追求完美,对工作细节严格要求。他是我们整个部门的领头羊,是我们的火车头,正因为有他的带领,我们这个团队才团结一致,在困难重重中完成一个又一个目标。 008年年初,**就来到了我们公司,这一干就是5年,从最初的土建工程师到现在的工程部经理,他一直诚信正直、表里如一、身先士卒、忠诚敬业。无论是周

人教版高中语文必修必背课文精编WORD版

人教版高中语文必修必背课文精编W O R D版 IBM system office room 【A0816H-A0912AAAHH-GX8Q8-GNTHHJ8】

必修1 沁园春·长沙(全文)毛泽东 独立寒秋, 湘江北去, 橘子洲头。 看万山红遍, 层林尽染, 漫江碧透, 百舸争流。 鹰击长空, 鱼翔浅底, 万类霜天竞自由。 怅寥廓, 问苍茫大地, 谁主沉浮。 携来百侣曾游, 忆往昔峥嵘岁月稠。

恰同学少年, 风华正茂, 书生意气, 挥斥方遒。 指点江山, 激扬文字, 粪土当年万户侯。 曾记否, 到中流击水, 浪遏飞舟。 雨巷(全文)戴望舒撑着油纸伞,独自 彷徨在悠长、悠长 又寂寥的雨巷, 我希望逢着 一个丁香一样地

结着愁怨的姑娘。 她是有 丁香一样的颜色, 丁香一样的芬芳, 丁香一样的忧愁, 在雨中哀怨, 哀怨又彷徨; 她彷徨在这寂寥的雨巷,撑着油纸伞 像我一样, 像我一样地 默默彳亍着 冷漠、凄清,又惆怅。她默默地走近, 走近,又投出 太息一般的眼光

她飘过 像梦一般地, 像梦一般地凄婉迷茫。像梦中飘过 一枝丁香地, 我身旁飘过这个女郎;她默默地远了,远了,到了颓圮的篱墙, 走尽这雨巷。 在雨的哀曲里, 消了她的颜色, 散了她的芬芳, 消散了,甚至她的 太息般的眼光 丁香般的惆怅。 撑着油纸伞,独自

彷徨在悠长、悠长 又寂寥的雨巷, 我希望飘过 一个丁香一样地 结着愁怨的姑娘。 再别康桥(全文)徐志摩 轻轻的我走了,正如我轻轻的来; 我轻轻的招手,作别西天的云彩。 那河畔的金柳,是夕阳中的新娘; 波光里的艳影,在我的心头荡漾。 软泥上的青荇,油油的在水底招摇; 在康河的柔波里,我甘心做一条水草! 那榆荫下的一潭,不是清泉, 是天上虹揉碎在浮藻间,沉淀着彩虹似的梦。寻梦?撑一支长篙,向青草更青处漫溯, 满载一船星辉,在星辉斑斓里放歌。

以愿望为话题的700字作文

以愿望为话题的700字作文 或许我的愿望有一些不会实现,但多年以后当我盘点它们审视自己,我定然会躬身自省。下面是为大家的关于愿望的,一起来看看吧。 金秋来临,果园里传出阵阵香味。每一棵树上都硕果累累,苹果、梨子、桔子……各种各样的果娃娃都成熟了,扬起了笑脸,准备去实现自己的愿望。 我也住在这美丽的果园里。我是一个苹果。在树妈妈身上有许许多多和我长得一样的兄弟姐妹。此刻,我们正在大谈着自己的愿望。有一些姐妹准备到城里去光荣一回,有一些兄弟准备成为小动物的晚餐,有一些姐妹想就留在土壤中陪伴树妈妈一生,还有一些兄弟准备到农民伯伯的仓库里住下,而我——一只青色的小苹果则想到遥远的沙漠扎根,成为一片绿洲。 兄弟姐妹们已陆续实现了愿望,只剩下我还赖在树妈妈身上。树妈妈对我说:“孩子,你的哥哥姐姐,弟弟妹妹都已经实现了自己的愿望,你也不能赖在我身上了,你的愿望是什么呢?”我有点自豪的扬起头,对妈妈说:“我想到沙漠里扎根,成为一片比这儿更美的绿洲!”妈妈有点不相信,用疑问的语气对我说:“孩子,沙漠离这儿很远,你能承受住这一路的艰辛吗?”我咬了咬嘴唇:“能”。

妈妈点点头,把我抖落到了旁边的河中,我连再见都没来得及说,就开始了旅行。 我顺着河流向远方奔去。一路上都很平安,我和路边的树伯伯对阿姨们打着招呼,看着河上飞过的小鸟惬意极了。这平安的路没走多久。我就到了一处河流很急的路段,河床上还时不时会有一两块尖利的石头,“呀——”我惊叫一声便被旋涡卷了进去。我重重地摔到了河床上,痛得张大了嘴,比害虫咬我时还痛。我努力往上游,终于出来了。我游呀游。看见了前方有一个瀑布,我被冲到了下面。身上摔出了一个大口子,我痛得失去了知觉。 阳光暖暖的烤着大地,我在哪?还好,我被河水冲到了岸上。我到了一处可爱的草地,这没有树,只有嫩绿的软软的小草和漂亮的蝴蝶。在这儿生长也不错,我活动活动,伤口不痛了。虽然没有到我愿望中的沙漠,可这个地方也不错。我笑了笑,与周围的草打了个招呼,便开始了我漫长的生长过程…… “太好了!相信有了愿望石,世界将变得更加美好!”娜莉笑着说,娜灵在一旁也很开心地笑着。

鼓励的作文

鼓励的作文 导读:关于鼓励的作文600字1 鼓励,是学海中劈波斩浪的浆;鼓励,是人生中相互互持的拐仗;总在汹涌波涛中给予你无穷力量,总在低谷中增强你的自信。 有一次,一张“79”分的试卷映入我的眼帘,这可是我考试的“最高记录”呀!我不禁含泪想着:怎么办?这么低的分数怎么向妈妈交差呀!朋友也不相信我了:“哎呀!就连平常厉害的龙群道都考得这么低啊。”我羞愧的低下头。我拿着试卷回到家,妈妈看了我的试卷后,却改变了常态,不是向平常那样揍我了,而是拿着试卷耐心地鼓励我:“儿子,你这里怎么本来是7的,写成了4啊,白白扣去了两分,不过你不要灰心,失败是成功之母,就算失败了,也总会有成功的。”听了妈妈的话,我又充满了信心,迎接下次考试。不过我没有用实际行动来证明,只是天天看着电视发呆,没有好好复习。然而,又一个不理想的成绩映入我眼帘,我顿时泪如泉涌,老师鼓励我:“每一个成功者都有一个开始,勇于开始,才能找到成功的路。失败了一次也无妨,世界上又有几人能一次就是对的'呢?就像打仗,我们也不是输了很多场吗?最后也是成功了的,不成功的话我们也不会有今天的自在生活。”我立即擦干眼泪,更加自信了,连神仙也打不烂我结实的“信心墙”,就读起书来。功夫不负有心人,以后每次考试,我都考得了90分以上。 由此可见,老师口中一句简简单单的表扬也许就能激发孩子内心

潜在的上进心和自信心,在一个人的成长过程中没有什么比自信心更重要的了,而老师的鼓励往往能帮助孩子建立自信。 关于鼓励的作文600字2 音乐家,需要鼓励才会唱出美妙的歌曲;钢琴家,需要鼓励才会奏出轻盈的乐谱;商人,需要鼓励才会收获最大的利益;而我们也需要给自己一些鼓励,不要一味的鼓励别人,其实自己也是需要鼓励的。 六年级开始,一次偶然的机会,我接触了网球,从此便爱上了网球这项运动。刚开始学的时候,并不是很顺利,有很多困难,比如说接不到球,不会打反手,反应有些慢,不能够迅速的掌握球的位置。但是后来,我尽力的克服困难,终于修得了正果。这一切来源于我对自己的鼓励。 那是一个寒冷的冬天,由于天冷,我们在室内打球,那天只有我和馨馨去了,到那时候我还不是打球很好,仍有许多缺陷。今天老师教我们练习打球时的脚步,单独练时我还很是高兴,因为我练的很快,老师也表扬了我,可是真正打球的时候,我却跟不上发球的速度,一头雾水,老师和馨馨一直的在为我加油,鼓励我,可是我对自己一点信心都没有,所以连打了五六个球,结果都不是很理想,我有些要放弃了,这时换了馨馨打,她打的就很好,我就想,她可以,我为什么不可以?于是又轮到我打了,我给自己加足了劲,深信自己一定会成功,一直在鼓励自己,终于,在球落地的那一刻,我瞄准了球,准备,垫步,上前,挥拍,一切动作忽然变得如此简单,等球落地那一刹那,

人教版高中语文必修一背诵篇目

高中语文必修一背诵篇目 1、《沁园春长沙》毛泽东 独立寒秋,湘江北去,橘子洲头。 看万山红遍,层林尽染;漫江碧透,百舸争流。 鹰击长空,鱼翔浅底,万类霜天竞自由。 怅寥廓,问苍茫大地,谁主沉浮? 携来百侣曾游,忆往昔峥嵘岁月稠。 恰同学少年,风华正茂;书生意气,挥斥方遒。 指点江山,激扬文字,粪土当年万户侯。 曾记否,到中流击水,浪遏飞舟? 2、《诗两首》 (1)、《雨巷》戴望舒 撑着油纸伞,独自 /彷徨在悠长、悠长/又寂寥的雨巷, 我希望逢着 /一个丁香一样的 /结着愁怨的姑娘。 她是有 /丁香一样的颜色,/丁香一样的芬芳, /丁香一样的忧愁, 在雨中哀怨, /哀怨又彷徨; /她彷徨在这寂寥的雨巷, 撑着油纸伞 /像我一样, /像我一样地 /默默彳亍着 冷漠、凄清,又惆怅。 /她静默地走近/走近,又投出 太息一般的眼光,/她飘过 /像梦一般地, /像梦一般地凄婉迷茫。 像梦中飘过 /一枝丁香的, /我身旁飘过这女郎; 她静默地远了,远了,/到了颓圮的篱墙, /走尽这雨巷。 在雨的哀曲里, /消了她的颜色, /散了她的芬芳, /消散了,甚至她的 太息般的眼光, /丁香般的惆怅/撑着油纸伞,独自 /彷徨在悠长,悠长 又寂寥的雨巷, /我希望飘过 /一个丁香一样的 /结着愁怨的姑娘。 (2)、《再别康桥》徐志摩 轻轻的我走了, /正如我轻轻的来; /我轻轻的招手, /作别西天的云彩。 那河畔的金柳, /是夕阳中的新娘; /波光里的艳影, /在我的心头荡漾。 软泥上的青荇, /油油的在水底招摇; /在康河的柔波里, /我甘心做一条水草!那榆阴下的一潭, /不是清泉,是天上虹 /揉碎在浮藻间, /沉淀着彩虹似的梦。寻梦?撑一支长篙, /向青草更青处漫溯, /满载一船星辉, /在星辉斑斓里放歌。但我不能放歌, /悄悄是别离的笙箫; /夏虫也为我沉默, / 沉默是今晚的康桥!悄悄的我走了, /正如我悄悄的来;/我挥一挥衣袖, /不带走一片云彩。 4、《荆轲刺秦王》 太子及宾客知其事者,皆白衣冠以送之。至易水上,既祖,取道。高渐离击筑,荆轲和而歌,为变徵之声,士皆垂泪涕泣。又前而为歌曰:“风萧萧兮易水寒,壮士一去兮不复还!”复为慷慨羽声,士皆瞋目,发尽上指冠。于是荆轲遂就车而去,终已不顾。 5、《记念刘和珍君》鲁迅 (1)、真的猛士,敢于直面惨淡的人生,敢于正视淋漓的鲜血。这是怎样的哀痛者和幸福者?然而造化又常常为庸人设计,以时间的流驶,来洗涤旧迹,仅使留下淡红的血色和微漠的悲哀。在这淡红的血色和微漠的悲哀中,又给人暂得偷生,维持着这似人非人的世界。我不知道这样的世界何时是一个尽头!

心愿的作文700字

心愿的作文700字 在我们每个人的成长道路上都要经历许多的磨难与挫折,不同的喜怒哀乐,与此同时,也会在心灵的某个角落悄悄地生起一个小小的心愿,但它也会随着岁月的流逝而渐渐凋零,又重新生根,发芽记下来年的希望…… 范文1 每个人的心愿都各种各样,每个人都有着自己的心愿。那些美好的心愿能够实现固然是美好的,但是那些不能变为现实的心愿却不乏唯一个个美好的梦想,其中在我的记忆的闸门里,有一个愿望就是当一名科学家。 对于男孩子来说,当一名科学家可能是每个男孩子都有的愿望,但是又有多少不是三分钟热度呢?我的愿望在我的心中一直存在着,滋长着。但当上科学家也只是一个幻想罢了。 小时,我曾幻想如果科学家的行列中出现了我的名字那该是多么美好的事啊!甚至到我长大后我也曾幻想过,如果我当上科学家,我会想方设法地去制造出能改变空气的机器,让它们能够将我们所生存的环境的空气净化地一尘不染,因为现代社会空气污染太严重了。 如果我成为了一名科学家,我会让人们懂得去遵守合法的规则,因为当今社会的人们太不遵守规则了,什么“中国

式过马路”、“随地吐痰”、“乱丢垃圾”...其实都可以说成中国人大多数的真实写照了,人们不懂得如何去遵守交通规则,让当今社会秩序混乱不堪,而似乎人们对现在的社会现象—抢劫、破坏环境?恶习已经麻木了,所以我不得不制造出改变人们恶劣习惯的机器,来净化人们的思想。 假如我是一名科学家,我会奉献我的一生给地球母亲,人们给她的伤害太多太多,原来地球母亲那光滑的脸蛋已不复存在了,留下的只有破坏,地球母亲已经遍体鳞伤了,我将为了她奉献我一切的一切。 虽然当上一名科学家只是一个比较遥远的梦想,但是人们所遇的处境、地球母亲所逢的处境不是幻想,即使梦想不会实现,但这愿望是美好的!同样也是真诚的! 范文2 终于放学了,经过了一天的学习,我拖着沉重的身子走在回家的路上,刚一到家我就像一滩烂泥似的趴在了地上,用撒娇的口气叫着:“妈妈,我回来了,饭怎么还没好?” 听见我的呼唤声,妈妈就飞似的跑了过来,拎起了我那沉重的书包,把我从地上拉了起来:“哟,身上凉冰冰的,外面很冷吧,你先去做作业,我给你冲杯牛奶暖暖身子。”我极不情愿地从地上爬了起来,提过书包进了书房。 这时妈妈端着一杯暖烘烘的牛奶向我走了过来,小心翼翼的递给了我。然后轻轻的用手抚摸着我的头,笑盈盈的说:
