
EUROGRAPHICS 2002 / G. Drettakis and H.-P. Seidel

(Guest Editors)

Volume 21 (2002), Number 3

Hardware Accelerated Interactive Vector Field Visualization:

A level of detail approach

Udeepta Bordoloi and Han-Wei Shen

E-mail: {bordoloi,hwshen}@cis.ohio-state.edu

Department of Computer and Information Sciences, The Ohio State University, Columbus, Ohio, USA

Abstract

This paper presents an interactive global visualization technique for dense vector fields using levels of detail. We introduce a novel scheme which combines an error-controlled hierarchical approach and hardware acceleration to produce high resolution visualizations at interactive rates. Users can control the trade-off between computation time and image quality, producing visualizations amenable for situations ranging from high frame-rate previewing to accurate analysis. Use of hardware texture mapping allows the user to interactively zoom in and explore the data, and also to configure various texture parameters to change the look and feel of the visualization. We are able to achieve sub-second rates for dense LIC-like visualizations with resolutions in the order of a million pixels for data of similar dimensions.

Categories and Subject Descriptors (according to ACM CCS): I.3 [Computer Graphics]: Applications

Keywords: Vector Field Visualization, Flow Visualization, Level of Detail, Line Integral Convolution, LIC

1. Introduction

Study, and hence, visualization of vector fields is an important aspect in many diverse disciplines. Visualization techniques for vector fields can be classified into local techniques and global ones. Examples of local techniques include particle traces, streamlines, pathlines, and streaklines, which are primarily used for interactive data exploration. Global techniques such as line integral convolution (LIC) 1 and spot noise 2, on the other hand, are effective in providing global views of very dense vector fields. These techniques are classified as global techniques because directional information at every point of the field is displayed, and the only limitation is the pixel resolution. The price for the rich information content of the global methods, however, is their high computational cost, which makes interactive exploration difficult.

We are concerned with the problem of interactive visualization of very dense two-dimensional vector fields with flexible level of detail controls. Even though processor speeds have increased significantly in recent years, a global visualization technique for dense vector fields which realizes interactive rates still eludes us. Likewise, although large format graphics displays that consist of tens of millions of pixels have become increasingly available, our ability to generate very high resolution visualization images at an interactive rate is clearly lacking. Furthermore, currently very few global vector field visualization techniques allow the user to freely zoom into the field at various levels of detail. This capability is often needed when the size of the original vector field exceeds the graphics display resolution. Finally, most of the existing global techniques do not allow a flexible control of the output quality to facilitate either a fast preview or a detailed analysis of the underlying vector field.

In this paper we introduce an interactive global vector field visualization technique aiming to tackle the above problems. The performance goal of our method is set to produce dense LIC-like visualizations with resolutions in the order of a million pixels within a fraction of a second. This is accomplished in part by utilizing graphics hardware acceleration, which allows us to increase the output resolution without linearly increasing the computation time. Additionally, our algorithm takes into account both a user-specified error tolerance and image resolution dependent criteria to adaptively select different levels of detail for different regions of the vector field. Moreover, the texture-based nature of our algorithm allows the user to configure various texture properties to control the final appearance of the visualization. This feature provides extra flexibility to represent the directional information, as well as other quantities, in various visual forms.

This paper is organized as follows. We first discuss previous relevant work (section 2) and present the level-of-detail algorithm for steady state flow (section 3). We discuss the level of detail selection process (section 4). The hardware acceleration issues are presented next (section 5), followed by the results and discussion (section 6).

© The Eurographics Association and Blackwell Publishers 2002. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.

2. Related Work

Texture based methods are the most popular techniques for visualization of dense vector fields. Spot noise, proposed by Van Wijk 2, convolved a random texture along straight lines parallel to the vector field. Another popular technique is the Line Integral Convolution (LIC). Originally developed by Cabral and Leedom 1, it uses a white noise texture and a vector field as its input, and results in an output image which is of the same dimensions as the vector field. Stalling and Hege 3 introduced an optimized version by exploiting coherency along streamlines. Their method, called 'Fast LIC', uses cubic Hermite interpolation of the advected streamlines, and optionally uses a directional gradient filter to increase image contrast. Forssell 4 applied the LIC algorithm to curvilinear grids. Okada and Kao 5 used post-filtering to increase image contrast and highlight flow features. Forssell 4 and Shen et al. 6 extended the technique to unsteady flow fields. Verma et al. 7 developed an algorithm called 'Pseudo LIC' (PLIC) which uses texture mapping of streamlines to produce LIC-like images of a vector field. They start with a regular grid overlaying the vector field grid, but they compute streamlines only over grid points uniformly sub-sampled from the original grid. Jobard and Lefer 8 applied texture mapping techniques to create animations of arbitrary density for unsteady flow.

Level-of-detail algorithms have been applied in various forms to almost all areas of visualization, including flow visualization 9, 10. Cabral and Leedom 11 used an adaptive quad-subdivision meshing scheme in which the quads are recursively subdivided if the integral of the local vector field curvature is greater than a given threshold. We use a similar subdivision for our level-of-detail approach. Depending on the error threshold, our algorithm can produce visualizations spanning the whole range from high-fidelity images to preview-quality (high frame-rate) images. Being resolution independent, it allows the user to freely zoom in and out of the vector field at interactive rates. Unlike many variations of LIC which require post-processing steps like equalization, a second pass of LIC, or high-pass filtering 5, our method does not need any extra steps. Moreover, changing textures and/or the texture-mapping parameters can allow us to produce a wide range of static representations and animations.

3. Algorithm Overview

In this section we present an interactive algorithm for global visualization of dense vector fields. The interactivity is achieved by level-of-detail computations and hardware acceleration. Level-of-detail approximations make it possible to save varying amounts of processing time in different regions based on the local complexity of the underlying vector field, thus providing a flexible run-time user-controlled trade-off between quality and execution time. Hardware acceleration allows us to compute dense LIC-like textures more efficiently than line integral convolution. Use of graphics hardware makes it possible to display the vector field at very high resolutions while maintaining the high texture frequency and low computation times.

To perform level-of-detail estimation, we define an error measure over the vector field domain (section 4.1). As a preprocessing step, we then construct a branch-on-need (BONO) 12 quadtree which serves as a hierarchical data structure for the error measure. The error associated with a node of the quadtree represents the error when only one representative streamline is computed for all the points within the entire region corresponding to the node. At run time, the quadtree is traversed and the error measure stored in each node is compared against a user-specified threshold. Using the level-of-detail traversal we are able to selectively reduce the number of streamlines required to generate the flow textures. In section 4.2, we discuss how the quadtree traversal is controlled based on the resolution of the display.

Hardware accelerated texture mapping is used to generate a dense image from the scattered streamlines output by the traversal phase of the algorithm. During the above mentioned quadtree traversal, quad blocks of different levels corresponding to different spatial sizes are generated. For each region, a streamline is originated from its center. A quadrilateral strip, with a width equal to the diagonal of the region, is constructed following the streamline. Henceforth we will refer to this quadrilateral strip as a 'stream-patch' and to the streamline as the 'medial streamline'. The stream-patch is then texture mapped with precomputed LIC images of a straight vector field. The texture coordinates for the quad-strip are derived by constructing a corresponding quad-strip at a random position in texture space. Figure 1 shows the construction and texture mapping of a stream-patch, and details are presented in section 5.1. The stream-patches for different regions are blended together (section 5.2). Each stream-patch extends beyond the originating region, covering many regions lying on its path. If a region has already been drawn over by one or more adjacent regions' stream-patches, it is no longer necessary to render the stream-patch for the region. In section 5.3, we discuss the use of the stencil buffer in graphics hardware to skip such regions.

In the following sections, we first explain the level-of-detail selection criteria, and then the hardware acceleration features of the algorithm.


Figure 1: Construction of the stream-patch: (a) the medial streamline, (b) the quadrilateral strip constructed on the streamline, (c) texture coordinates corresponding to the vertices of the quad-strip, and the texture parameters a, b, (d) the quad-strip after it has been texture mapped.

4. Level-of-Detail Selection

The level-of-detail selection process involves two distinct phases: (1) construction of a quadtree (for level-of-detail errors) as a preprocessing step, and (2) resolution dependent traversal of the quadtree at run-time with user-specified thresholds. Below, we elaborate on each of these stages.

4.1. Error Measures

An 'ideal' error metric for a level-of-detail representation should give a measure of how (in)correct the vector field approximation will be compared to the original field data. The textures produced by the algorithm provide information through unquantifiable visual stimulus. Therefore it is difficult to formally define the 'correctness' of the visualization produced. The fidelity of illustration of the vector field should be measured not from the values of the pixels of the image produced, but from the visual effect that the texture pattern has on the user. We use an error measure which tries to capture the difference between the texture direction at a point, which is the approximated vector direction, and the actual vector field direction at the point.

Consider the texture pattern at a point which is not on the medial streamline, but falls within the stream-patch. For each quad of the quad-strip, the texture direction is parallel to the medial-streamline segment within that quad. So, within each quad, we are approximating the vector field as a field parallel to the medial streamline. The error can thus be quantified by the angular difference between the directions of the medial streamline and the actual vector field at the sample point.
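As a minimal sketch of this error measure, the angular difference at a sample point can be computed as follows. Folding the angle into [0, π/2], i.e. ignoring vector orientation, is our assumption (a LIC-style texture carries no arrow-head, so a vector and its negation look identical); the paper does not state this explicitly.

```cpp
#include <cmath>

// Angular difference between the medial-streamline segment direction
// (sx, sy) and the actual field vector (vx, vy) at a sample point.
// The result is folded into [0, pi/2]: orientation is ignored, which is
// our assumption since the texture pattern has no direction of travel.
double angularError(double sx, double sy, double vx, double vy) {
    double ls = std::sqrt(sx * sx + sy * sy);
    double lv = std::sqrt(vx * vx + vy * vy);
    if (ls == 0.0 || lv == 0.0) return 0.0;      // degenerate: no direction
    double c = (sx * vx + sy * vy) / (ls * lv);  // cosine of the angle
    if (c < -1.0) c = -1.0;                      // guard against rounding
    if (c > 1.0)  c = 1.0;
    double a = std::acos(c);                     // angle in [0, pi]
    const double pi = 3.14159265358979323846;
    return (a > pi / 2.0) ? pi - a : a;          // fold to [0, pi/2]
}
```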

Since each quadtree node originates a stream-patch that will travel outside the node boundary, the error measure associated with a particular quadtree node should consider all the points that are within the footprint of the stream-patch.

Figure 2: Multi-level error for the vector field shown in figure 7. The images shown correspond to the following three levels of the error quadtree: 16x16 (top-left), 8x8 (bottom-left) and 4x4 (right) square regions. The error value range is mapped to [0.0, 1.0], with 0.0 being the darkest and 1.0 the brightest.

To do this, for each quad in the quad-strip, we find the angular difference between the directions of the medial streamline segment and each of the vector field's grid points within the quad. The error for a stream-patch originated from a quadtree node is then defined as the maximum angular difference for the grid points across all quads of that stream-patch. Since the error would depend on the length of the stream-patch (which is user-configurable), we take a conservative approach and calculate the errors assuming large values of length. Note that while taking the maximum angular deviation as the error always keeps the error below the user-specified tolerances, it can be very sensitive to noise. For noisy data, taking a weighted average might prove helpful. The level-of-detail approximation errors for a particular level are computed by constructing stream-patches for all the quadtree nodes of that level and then computing the errors for each stream-patch. The error values for the vector field in figure 7 are shown in figure 2, and those for the vector field in figure 8 are in figure 3. The error values are normalized to the range [0.0, 1.0] to make them user friendly.
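The per-node aggregation described above can be sketched as follows. The Quad layout (segment direction plus sampled field directions, both as angles) is a hypothetical data arrangement of ours, and the noise-robust variant is simplified to a plain average rather than the weighted average the paper suggests.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct Quad {                         // one quad of a stream-patch
    double segAngle;                  // direction of the medial segment (rad)
    std::vector<double> fieldAngles;  // field direction at grid points inside
};

// Fold a direction difference into [0, pi/2] (orientation ignored; our
// assumption, consistent with a pattern that has no arrow-head).
static double fold(double d) {
    const double pi = 3.14159265358979323846;
    d = std::fabs(std::fmod(d, pi));
    return (d > pi / 2.0) ? pi - d : d;
}

// Node error = maximum angular deviation over all quads of the patch.
double patchErrorMax(const std::vector<Quad>& patch) {
    double e = 0.0;
    for (const Quad& q : patch)
        for (double a : q.fieldAngles)
            e = std::max(e, fold(a - q.segAngle));
    return e;
}

// For noisy data the paper suggests a weighted average instead; here a
// plain average over all sampled grid points, as a minimal sketch.
double patchErrorAvg(const std::vector<Quad>& patch) {
    double s = 0.0; std::size_t n = 0;
    for (const Quad& q : patch)
        for (double a : q.fieldAngles) { s += fold(a - q.segAngle); ++n; }
    return n ? s / n : 0.0;
}
```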

It can be seen from the images in figure 2 that in any particular level, the regions around the critical points (the three vortices and two saddle points) have the highest error values. The critical points do not need any special handling, as they would be represented by stream-patches of the finest level-of-detail allowed by the display resolution (section 4.2). Because of the high curvature around critical points, use of thicker stream-patches would have resulted in artifacts due to self-intersections. The errors gradually fall off as we move away from the critical points, and hence would result in progressively coarser level-of-detail stream-patches. Also, errors for any particular region increase across levels. So, a high error threshold will permit a coarse level-of-detail approximation.


Figure 3: Multi-level error for the vector field shown in figure 8. The images shown correspond to the levels 8x8 (top-left), 4x4 (top-right) and 2x2 (bottom) of the error quadtree. The error value range is mapped to [0.0, 1.0], with 0.0 being the darkest and 1.0 the brightest.

Similarly, in figure 3, the regions near the vortices have the highest errors. Since no run-time parameters are required for the error calculations, this error quadtree needs to be generated only once for the entire life of the dataset. At run-time, it can be read in along with the dataset.

4.2. Resolution dependent level-of-detail selection

In this section we describe the run-time aspects of the level-of-detail selection phase, which requires the user to input a threshold for acceptable error. We shall call the ratio of the vector field resolution to the display resolution the resolution ratio, k. Consider, for example, an x_v × y_v vector field dataset, and suppose our visualization window resolution is x_w × y_w. Let

x_v = k × x_w;  y_v = k × y_w    (1)

Note that in an interactive setting, the value of k changes if the user zooms in or out. If the display window has a resolution not smaller than that of the vector field, i.e., k ≤ 1, the quadtree is traversed in a depth-first manner, and stream-patches are rendered for quad blocks satisfying the error condition. When the display resolution is smaller, i.e., k > 1, the quadtree traversal is performed using resolution dependent tests in addition to the error threshold test. The resolution dependent controls in traversal are motivated by two goals: (1) we want to limit the quadtree traversal to the minimum block size which occupies more than one pixel on the display, and (2) we want to avoid a potential popping effect caused by a changing k which causes the above mentioned minimum block size to go up (or down) by one level.
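The depth-first traversal for the k ≤ 1 case might look like the sketch below. The node layout and function names are ours, not from the paper; a real BONO quadtree would also carry the region extents of each node.

```cpp
#include <vector>

// Minimal quadtree node for the run-time traversal sketch (k <= 1 case):
// descend depth-first, and emit a block as soon as its stored error is
// within the user threshold; otherwise recurse into the four children.
struct Node {
    double error;     // precomputed level-of-detail error for this block
    Node* child[4];   // all null for a leaf (finest level)
    bool isLeaf() const { return child[0] == nullptr; }
};

void selectBlocks(const Node* n, double threshold,
                  std::vector<const Node*>& out) {
    if (!n) return;
    if (n->error <= threshold || n->isLeaf()) {
        out.push_back(n);   // render one stream-patch for this whole block
        return;
    }
    for (const Node* c : n->child)
        selectBlocks(c, threshold, out);
}
```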

For the discussion below, let us assume that at a particular instant m > k > m/2, where m/2 = 2^i, i ∈ {1, 2, ...}. Since m/2 × m/2 blocks occupy a display area less than the size of one pixel, we limit the quadtree traversal to the m × m block level. Now, if the user zooms in, k becomes progressively smaller, and at some point of time we will have k = m/2. At this instant, the minimum displayable block size becomes m/2. A lot of m × m blocks (those that do not satisfy the error threshold) will become eligible to change their level-of-detail to m/2 × m/2. If allowed to do so, this will result in a popping effect. To avoid this, we allow only a very small number of m × m blocks to change their level-of-detail to m/2 × m/2. As the user keeps zooming in, k continues to decrease, and we gradually allow more and more blocks to change their level-of-detail. If the user continues to zoom in, by the time k becomes equal to m/4, all the m × m blocks will have changed into m/2 × m/2 blocks. This gradual change in the level-of-detail is achieved by modifying the error threshold test for m × m blocks. The user supplied error threshold is scaled to a high value when k = m/2. As k decreases, we decrease the scaled threshold, such that by the time k = m/4, the threshold reaches its original user supplied value.

5. Hardware Acceleration

After the quadtree traversal phase has resolved the levels-of-detail for different parts of the vector field, stream-patches are constructed for each region corresponding to its level-of-detail. They are then texture mapped and rendered using graphics hardware (section 5.1). The different sized stream-patches need to be blended together to construct a smooth image (section 5.2). An OpenGL stencil buffer optimization is used to further reduce the number of medial streamlines computed (section 5.3).

5.1. Resolution Independence

For each quadtree node that the traversal phase returns, a stream-patch is constructed using the medial streamline for that node. The texture coordinates for the stream-patch are derived by constructing a corresponding quad-strip at a random position in the texture space (figure 1(c)). The parameters a (width) and b (height) of the corresponding texture space quad-strip determine the frequency of the texture on the texture mapped stream-patch.

Since we are essentially rendering textured polygons, the output image can easily be rendered at any resolution. When the window size is the same as the vector field size, that is, the resolution ratio k is unity (equation (1)), the stream-patches have a width equal to the diagonal of the quadtree nodes they correspond to. When k is changed (e.g., when the user interactively zooms in or out, or when the window size is changed), the width of the stream-patches at each level-of-detail is modulated by 1/k. Simultaneously, a and b need to be changed to reflect the change in k. Otherwise, when we zoom out, the texture shrinks, leading to severe aliasing (figure 9(b)). Note that we cannot use anti-aliasing techniques like mipmaps, as the texture would get blurred and all directional information would be lost. Similarly, for zoom-ins, the texture is stretched, and we lose the granularity (figure 10(b)). For high zoom-ins, or for high resolution large format displays, the 'constant texture frequency' feature of our algorithm proves very useful. When the display resolution is finer than the vector field resolution (k < 1), we are left with sparsely distributed streamlines. But due to the high texture frequency, the final image gives the perception of dense streamlines, which can be considered to be interpolated from the sparse original streamlines.

5.2. Blending Stream-patches

To ensure that the final image shows no noticeable transition between adjacent simplified regions, a smooth blending of neighboring stream-patches needs to be performed.

A uniform blending (averaging) will result in two undesirable properties. Firstly, the resultant image will lose contrast. If many stream-patches are rendered over a pixel, its value tends to the middle of the gray scale range. This is especially unsuitable for our algorithm, as the loss of contrast will be non-uniform across the image due to the non-uniform nature of the level-of-detail decomposition of the vector field. Secondly, the correctness of the final image (up to the user-supplied threshold) will be compromised. This happens because a stream-patch corresponding to a coarser level-of-detail will have a non-zero effect on its neighboring regions, some of which might correspond to finer levels-of-detail. To prevent coarser level-of-detail stream-patches from affecting the pixel values of nearby finer level-of-detail regions, we use a coarser-level-of-detail to finer-level-of-detail rendering order, combined with the opacity function shown in figure 4. The OpenGL blending function used is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Because we use an opacity value of unity at the central part of the stream-patch, the pixels covered by this part are completely overwritten with texture values of this patch, thus erasing previous values due to coarser level-of-detail stream-patches. Moreover, this opacity function results in a uniform contrast across the image, even though the number of stream-patches drawn and blended over different parts of the image varies a lot. To reduce aliasing patterns, we jitter the advection length of medial streamlines in either direction and adjust the opacity function accordingly.

To ensure a smooth transition between adjacent patches, we also vary the stream-patch opacities in the direction perpendicular to the medial streamline using a similar opacity function. This is done at a cost of increased rendering time, since we add one quad strip on each side of the stream-patch so that the opacities can be varied laterally. In our implementation, the user can turn the lateral opacity variation off for high frame-rate preview quality requirements.
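The opacity profile of figure 4 can be sketched as a plateau-plus-ramp function over the reparametrized arc length. The plateau half-width c is an assumed parameter: the paper specifies the shape only graphically, requiring unit opacity over the central part and falloff toward the patch ends.

```cpp
#include <cmath>

// Opacity along the medial streamline (figure 4): arc length is
// reparametrized to s in [-1, 1] with the seed point at 0. The profile is
// 1 over a central plateau (so those pixels fully overwrite coarser
// patches) and ramps linearly to 0 at the tips. The plateau half-width c
// is an assumed parameter, not a value given in the paper.
double opacity(double s, double c = 0.5) {
    double t = std::fabs(s);
    if (t <= c)   return 1.0;        // fully opaque center
    if (t >= 1.0) return 0.0;        // fully transparent at the patch ends
    return (1.0 - t) / (1.0 - c);    // linear falloff in between
}
```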

Figure 4: Opacity function: (a) the value of α over the arc length of the medial streamline, reparametrized to lie between -1 and 1 with the originating point at 0, (b) the patch without blending, and (c) the stream-patch after blending.

5.3. Reducing Streamline Redundancy

Because stream-patches continue beyond their originating regions, each stream-patch covers many pixels, albeit with different opacities. From our experiments, we found that a pixel goes to an opacity value of unity with the first few stream-patches that are rendered over it. Thus, if a pixel has been drawn over by a few stream-patches, then we do not need to texture map any more stream-patches for this pixel. We use this idea to reduce the number of stream-patches that need to be rendered, and hence the number of medial streamlines that need to be computed. However, among the many stream-patches that may have been rendered over this pixel, only those which are of the same or finer level of detail as this pixel's level-of-detail should be counted.

We avoid doing this 'minimum number of renderings' test in software by using the OpenGL stencil buffer to keep track of how many stream-patches have been drawn over a pixel. When GL_STENCIL_TEST is enabled, the stencil buffer comparison is performed for each pixel being rendered to. The stencil test is configured using the following OpenGL functions:

• glStencilFunc(GL_ALWAYS, 0, 0): Specifies the comparison function used for the stencil test. For our purpose, every pixel needs to pass the stencil test (GL_ALWAYS).
• glStencilOp(GL_KEEP, GL_INCR, GL_INCR): Sets the actions on the stencil buffer for the following three scenarios: the stencil test fails, the stencil test passes but the depth test fails, and both tests pass. In our case, the stencil test always passes, so we increment (GL_INCR) the value of the stencil buffer at the pixel being tested by one.

Before starting to compute the stream-patch for a region, we read the stencil buffer values for all the pixels corresponding to that region. If all the pixels have been rendered to a minimum number of times, we skip the region. If not, we render the stream-patch, and the stencil buffer values of all the pixels which this patch covers are updated by OpenGL. While going through our coarser-to-finer level drawing order (as mentioned in section 5.2), we clear the stencil buffer each time we finish one level. Otherwise, a finer level-of-detail region which has been drawn over by coarser level-of-detail stream-patches might be skipped because each pixel in the region has already satisfied the threshold of minimum number of renderings. This would violate the error criteria for the region.
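Stripped of the OpenGL readback, the skip test itself reduces to a simple check over the region's stencil counts. This is a software sketch: in the implementation the counts come from the stencil buffer, and the function name and minCount parameter are ours.

```cpp
#include <vector>

// Software sketch of the stencil-based skip test (section 5.3): a region
// can be skipped when every one of its pixels has already been covered by
// at least 'minCount' stream-patches of the same or finer level of detail.
// In the real implementation these counts live in the OpenGL stencil
// buffer and are read back only once every few hundred stream-patches.
bool canSkipRegion(const std::vector<int>& stencilCounts, int minCount) {
    for (int c : stencilCounts)
        if (c < minCount) return false;  // some pixel still under-covered
    return true;  // safe to skip: no medial streamline needed here
}
```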

The stencil buffer read operation is an expensive one in terms of time. If done for every stream-patch, it would take so much time that we would be better off not using it. In our implementation, we read the stencil buffer once every few hundred stream-patches. The values are reused till we read in the buffer once again.

6. Results and Discussion

We present the performance and various visual results of our algorithm, implemented in C++ using FLTK for the GUI. The timings are taken on a 1.7 GHz Pentium with an nVidia Quadro video card. The results show that the image quality remains reasonable even when error thresholds are increased to achieve high speedups. Moreover, various aspects of the visualizations are interactively configurable, as shown by the different visuals presented in this section.

6.1. Range of Image Quality and Speed

We present results for a simulated dataset of vortices (and saddles) with dimensions of 1000x1000, and for a real 573x288 dataset of ocean winds. The error calculation time for the vortices dataset was 148 seconds, and the ocean dataset required 20 seconds. For each dataset, two images are shown: one with tight error limits, and the other with relaxed bounds. For comparison, the images produced by FastLIC are shown in figures 5 and 6. A fourth order adaptive Runge-Kutta integration is used, with the same parameters for both FastLIC and our algorithm. The medial streamlines in our algorithm were advected to the same length as the convolution length used for FastLIC.

Figure 7 shows the results of our level-of-detail algorithm for the vortices dataset rendered for a 1000x1000 display window. Figure 7(a) was generated using a low error threshold in 0.81 seconds, while figure 7(b) was produced using a high error threshold in 0.39 seconds. Compared to FastLIC, we achieve speed-ups of 15-30 depending on the error threshold for level-of-detail selection. Figures 8(a) and (b) are outputs for the ocean dataset, rendered for a display window of the same dimensions. A low error threshold was used for figure 8(a); it was relaxed for figure 8(b). The times taken were 0.32 and 0.16 seconds respectively, for speed-ups of 5.9-11.9 compared to FastLIC. There is no visible difference in image quality in either dataset for the low error thresholds.

dataset    FastLIC   LOD (low error)   LOD (high error)
Vortices   12.32s    0.81s (15.2)      0.39s (31.6)
Ocean      1.9s      0.32s (5.9)       0.16s (11.9)

Table 1: Timing results for the 1000x1000 vortices dataset and the 573x288 ocean winds dataset. The timings reported are for FastLIC, and for our algorithm at two level-of-detail error thresholds. The speed-ups for the level-of-detail algorithm with respect to FastLIC are shown in parentheses. No post-processing has been done in any of these runs. The images for these runs are in figures 5, 6, 7 and 8.

Figure 5: LIC image of the 1000x1000 vortices dataset in 12.32s, using a convolution length of 60.

Figure 6: LIC image of the 573x288 ocean wind dataset in 1.9s, using a convolution length of 30.

For the higher thresholds (figures 7(b) and 8(b)), the differences are very minute and not readily noticeable.


Figure 7: Vortices dataset using: (a) low error threshold, in 0.81s, (b) high error threshold, in 0.39s. The image quality remains good even for high error thresholds.

Figure 8: Ocean dataset using: (a) low error threshold, in 0.32s, (b) high error threshold, in 0.19s. The image quality remains good even for high error thresholds.

6.2. Interactive Exploration

The level-of-detail features allow users to visualize the vector field at a wide range of resolutions. The algorithm automatically adjusts the display parameters to render an unaliased image of a large dataset for display on a small screen. Figures 9(a) and (b) show the results for visualizations on low resolution displays, with and without resolution adjusted parameters. At the other extreme, the vector data can be rendered at very high resolutions when displayed on large format graphics displays, or when viewed at high zoom factors. The hardware texture mapping allows us to change resolutions at interactive frame rates, thus allowing the user to freely zoom into and out of the dataset. Figure 10(a) shows the central vortex of the vortices dataset at a magnification factor of 20x. The texture mapping parameters are changed dynamically, so that the texture does not get stretched. For comparison, figure 10(b) shows the same rendering when the texture parameters are not adjusted.

6.3. Scalar Variable Information

Information about a scalar variable defined on the vector field can be superimposed on the directional representation of the vector field. One technique for that is to modulate the texture frequency of the final image based on the local values of the scalar. Kiu and Banks 13 applied this scheme to LIC images by using noise textures of different frequencies.


Figure 9: Zoom out: The vortices dataset rendered for a display window one-fourth its size: (a) the texture coordinates are adjusted to prevent aliasing, and the finest displayable level-of-detail is adjusted to match the display resolution, (b) the image gets aliased if scaled down without adjusting the texture parameters.

Figure 10: Zoom in: The central vortex of the dataset in figure 7 is shown at a magnification factor of 20x: (a) the texture coordinates are adjusted automatically to compensate for the magnification factor, thus maintaining the original texture frequency, (b) the texture loses granularity if the coordinates are not adjusted.

We follow a different approach, using multiple textures to achieve the same goal. We start with an ordered set of precomputed textures in which some texture property varies monotonically from one extreme of the set to the other. For the example presented in figure 11, the property is the length of the LIC directional pattern. That is, textures of short LIC patterns (produced by small convolution lengths) are at one end of the set, and those with long patterns (large convolution lengths) are at the other. For each quad in a stream-patch, a texture is selected as the source for texture mapping based on the value of the scalar, much as mipmap levels are selected depending on screen area. However, using different textures in adjacent quads of the quad-strip destroys the directional continuity of the texture-mapped strip. As a way around this, in a process analogous to blending two adjacent levels of mipmaps, each quad is rendered twice with textures that are adjacent in the ordered texture bank. The opacities of the two rendered textures are weighted so that the resultant texture varies smoothly as the scalar value changes. The net effect is a texture property which varies smoothly relative to a scalar value. The detail of the scalar representation in a particular image is limited to the level-of-detail approximation that is used for generating the image. In some situations it may be desirable to control the error in the scalar-value illustration; the quadtree of errors would then need to be constructed using both the angle error (section 4.1) and the error in the scalar value. Figure 11 is generated using the multi-texturing method to show the magnitude of velocity of the vortices dataset. Short patterns indicate low velocity, whereas the LIC pattern is long in areas of high velocity.

Figure 11: Illustration of the use of multiple textures to show a scalar variable on the vector field, velocity in this example. The velocity is high in parts around the vortices, and is low at the lower left and upper right corners.
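The two-texture blend can be sketched as follows. This is a minimal illustrative helper, not the paper's code: it assumes the scalar is linearly mapped onto a continuous index into the ordered texture bank, and the fractional part of that index gives the opacity weights for the two adjacent textures (exactly as trilinear mipmapping blends two adjacent mipmap levels).

```python
def select_blend_textures(scalar, s_min, s_max, n_textures):
    """Map a scalar value to a pair of adjacent textures in an
    ordered bank, plus the opacity weights for blending them.

    Hypothetical helper (names and linear mapping are assumptions):
    returns (lower_index, upper_index, lower_weight, upper_weight).
    """
    if n_textures < 2:
        return 0, 0, 1.0, 0.0
    # Normalize the scalar to a continuous position in the bank.
    t = (scalar - s_min) / (s_max - s_min)
    t = min(max(t, 0.0), 1.0)          # clamp out-of-range scalars
    pos = t * (n_textures - 1)
    lo = min(int(pos), n_textures - 2)  # lower of the two adjacent textures
    frac = pos - lo                     # blend factor toward the next texture
    # The quad is rendered twice: texture `lo` with opacity (1 - frac),
    # texture `lo + 1` with opacity `frac`.
    return lo, lo + 1, 1.0 - frac, frac
```

As the scalar sweeps across its range, the weights cross-fade smoothly from one bank entry to the next, which is what preserves the directional continuity across adjacent quads.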

We can also apply the multiple-texture technique to preserve a constant texture frequency for vector fields on grids which are not regular. Using the method presented by Forssell 4, a vector field representation is generated on a regular grid. This image is then texture mapped onto the actual surface in physical space. During this mapping, the regular grid cells used for computation are transformed to different sizes in physical space, which can result in stretching or pinching of the texture. To prevent this, we use textures of varying frequencies for each quad, subject to the area the quad occupies in physical space. Figure 12(a) shows the result when a usual constant-frequency texture is mapped on a grid in which the cells grow progressively smaller from bottom to top, resulting in aliasing. Figure 12(b) is the image generated by our algorithm using multiple-frequency textures subject to the grid cell area. The texture frequency decreases from bottom to top, so that when it is mapped to the grid (figure 12(c)), there is no aliasing.

Figure 12: Multiple texture rendering to show antialiasing for a grid which has small cells at the top and large ones at the bottom: (a) constant frequency image mapped to the grid, showing aliasing, (b) multi-frequency texture generated using multiple textures, (c) the texture in (b) mapped to the grid, without aliasing.
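The per-quad frequency choice described above can be sketched as a lookup into the frequency-ordered texture bank. This is our own illustrative reading, not the paper's implementation: we assume linear stretch (and hence the frequency dilution) scales with the square root of the area ratio between a quad's physical-space and computational-space footprints, and we simply pick the closest available bank frequency.

```python
import math

def pick_frequency_texture(area_physical, area_computational,
                           base_frequency, bank_frequencies):
    """Choose, from a bank of textures of different spatial
    frequencies, the one that best keeps the apparent frequency
    constant after a computational-grid quad is mapped to its
    physical-space size.

    Hypothetical helper; the sqrt(area ratio) model is an assumption.
    """
    # A quad stretched to k times its computational area dilutes the
    # texture frequency by roughly sqrt(k); compensate by selecting a
    # texture denser by the same factor.
    scale = math.sqrt(area_physical / area_computational)
    target = base_frequency * scale
    # Nearest available frequency in the precomputed bank.
    return min(bank_frequencies, key=lambda f: abs(f - target))
```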

6.4. Streamline Textures

The level-of-detail technique can be applied with a variety of textures to get diverse visualizations. If we use a sparse texture, the output takes on the appearance of a streamline representation, which is a popular method of flow visualization 14, 15, 16.

Figures 13(a-c) are generated using a stroke-like texture to give the image a hand-drawn feeling. The stroke in the texture is oriented by making the tail wider than the head, which adds directional information to the image. Unlike the previous textures, this texture has an alpha component which is non-zero only over the oriented stroke. Figure 13(a) is generated by constraining the approximation to a single level-of-detail, which creates a uniform distribution of streamlines. The stencil buffer option (section 5.3) is turned on to keep the streamlines from crowding one another. Figures 13(b-c) have been rendered using the level-of-detail approximations. Since the level-of-detail is finer near the saddle points and the vortices, more streamlines are drawn near these parts. Due to the relative lack of streamlines in the parts of the vector field which have low errors, the more complex parts of the vector field stand out to the viewer.
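The anti-crowding behaviour of the stencil test can be illustrated with a small software analogue. The paper does this in the hardware stencil buffer; the sketch below (names and data layout are ours) keeps a boolean occupancy grid instead: a stroke is drawn only if none of its pixels are already covered, and drawn strokes mark their pixels so later, overlapping strokes are rejected.

```python
def place_strokes(candidates, width, height):
    """Software analogue of the stencil-buffer crowding test.

    Each candidate stroke is a set of (x, y) pixel coordinates.
    A stroke is accepted only if it touches no pixel already
    occupied by an earlier stroke. Illustrative sketch only.
    """
    occupied = [[False] * width for _ in range(height)]
    drawn = []
    for stroke in candidates:
        if any(occupied[y][x] for (x, y) in stroke):
            continue  # would overlap an existing streamline; skip it
        for (x, y) in stroke:
            occupied[y][x] = True  # mark pixels, like writing the stencil
        drawn.append(stroke)
    return drawn
```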

6.5. Animation

Animation of the vector field images is achieved by translating the texture coordinates of each stream-patch from one frame to the next. The only additional constraint is that a cyclic texture be used for texture mapping the stream-patches. For example, the texture in figure 1(c) needs to be cyclic along the vertical direction. For each time step, the image is rendered by vertically shifting (downwards) the texture coordinates compared to the previous frame. Since the texture is cyclic, if the texture coordinates move past the bottom edge of the texture, they reappear at the top edge. To show different speeds, the texture coordinates for each stream-patch are moved by an amount proportional to the velocity at the seed-point of the patch.
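The per-frame coordinate update above amounts to a velocity-scaled shift with wrap-around. A minimal sketch, with assumed parameter names (the paper does this through hardware texture coordinates, not CPU arrays):

```python
def advance_texture_coords(v_coords, seed_speed, dt, gain=1.0):
    """Shift a stream-patch's vertical texture coordinates for one
    animation frame.

    The texture is cyclic along v, so coordinates wrap modulo 1;
    the shift is proportional to the speed at the patch's seed
    point, making faster regions appear to flow faster.
    Illustrative sketch; `gain` is an assumed tuning factor.
    """
    shift = gain * seed_speed * dt
    # Shifting downward past the bottom edge re-enters at the top.
    return [(v - shift) % 1.0 for v in v_coords]
```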

7. Conclusion and Future Work

Using a level-of-detail framework, we are able to reduce the computation times for dense visualizations of vector fields. Coupled with hardware acceleration, the algorithm generates high-quality visualizations at interactive rates for large datasets and large displays. The resolution independence and user-controlled image quality make this algorithm extremely useful for vector data exploration, which until now has mostly been done using probing techniques such as streamlines and particle advection.

Acknowledgement

This work was supported by The Ohio State University Research Foundation Seed Grant, and in part by NSF Grant ACR-0118915 and NASA Grant NCC2-1261. We would like to thank Ravi Samtaney and Zhanping Liu for providing the test data sets, and Dr. Roger Crawfis and Dr. David Kao for useful comments.


Figure 13: Stroke-like textures used to generate a streamline representation: (a) uniform placement of streamlines using a constant level-of-detail; (b), (c) streamlines generated using the error quadtree traversal. The level-of-detail algorithm generates a non-uniform distribution of streamlines, giving more detail to higher-error regions and vice versa.

References

1. B. Cabral and C. Leedom. Imaging vector fields using line integral convolution. In Proceedings of SIGGRAPH 93, pages 263–270. ACM SIGGRAPH, 1993.

2. J. van Wijk. Spot noise: Texture synthesis for data visualization. Computer Graphics, 25(4):309–318, 1991.

3. D. Stalling and H.-C. Hege. Fast and resolution independent line integral convolution. In Proceedings of SIGGRAPH '95, pages 249–256. ACM SIGGRAPH, 1995.

4. L.K. Forssell and S.D. Cohen. Using line integral convolution for flow visualization: Curvilinear grids, variable-speed animation, and unsteady flows. IEEE Transactions on Visualization and Computer Graphics, 1(2):133–141, 1995.

5. A. Okada and D.L. Kao. Enhanced line integral convolution with flow feature detection. In Proceedings of IS&T/SPIE Electronic Imaging '97, pages 206–217, 1997.

6. H.-W. Shen and D.L. Kao. A new line integral convolution algorithm for visualizing time-varying flow fields. IEEE Transactions on Visualization and Computer Graphics, 4(2), 1998.

7. V. Verma, D. Kao, and A. Pang. PLIC: Bridging the gap between streamlines and LIC. In Proceedings of Visualization '99, pages 341–348. IEEE Computer Society Press, 1999.

8. B. Jobard and W. Lefer. Unsteady flow visualization by animating evenly-spaced streamlines. Computer Graphics Forum (Proceedings of Eurographics 2000), 19(3), 2000.

9. H. Garcke, T. Preußer, M. Rumpf, A. Telea, U. Weikard, and J. van Wijk. A continuous clustering method for vector fields. In Proceedings of Visualization '00, pages 351–358. IEEE Computer Society Press, 2000.

10. X. Tricoche, G. Scheuermann, and H. Hagen. Continuous topology simplification of planar vector fields. In Proceedings of Visualization '01, pages 159–166. IEEE Computer Society Press, 2001.

11. B. Cabral and C. Leedom. Highly parallel vector visualization using line integral convolution. In Proceedings of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, pages 803–807, 1995.

12. J. Wilhelms and A. Van Gelder. Octrees for faster isosurface generation. ACM Transactions on Graphics, 11(3):201–227, 1992.

13. M.-H. Kiu and D. Banks. Multi-frequency noise for LIC. In Proceedings of Visualization '96, pages 121–126. IEEE Computer Society Press, 1996.

14. G. Turk and D. Banks. Image-guided streamline placement. In Proceedings of SIGGRAPH '96, pages 453–460. ACM SIGGRAPH, 1996.

15. B. Jobard and W. Lefer. Creating evenly-spaced streamlines of arbitrary density. In Proceedings of the Eighth Eurographics Workshop on Visualization in Scientific Computing, pages 57–66, 1997.

16. R. Wegenkittl and E. Gröller. Fast oriented line integral convolution for vector field visualization via the Internet. In Proceedings of Visualization '97, pages 309–316, 1997.

