An Adaptive Framework for Visualizing Unstructured Grids with Time-Varying Scalar Fields

Fábio F. Bernardon a,*, Steven P. Callahan b, João L. D. Comba a, Cláudio T. Silva b

a Instituto de Informática, Federal University of Rio Grande do Sul, Brazil

b Scientific Computing and Imaging Institute, University of Utah

Abstract

Interactive visualization of time-varying volume data is essential for many scientific simulations. This is a challenging problem since this data is often large, can be organized in different formats (regular or irregular grids), with variable instances of time (from hundreds to thousands) and variable domain fields. It is common to consider subsets of this problem, such as time-varying scalar fields (TVSFs) on static structured grids, which are suitable for compression using multi-resolution techniques and can be efficiently rendered using texture-mapping hardware. In this work we propose a rendering system that considers unstructured grids, which do not have the same regular properties crucial to compression and rendering. Our solution simultaneously leverages multiple processors and the graphics processing unit (GPU) to perform decompression, level-of-detail selection, and volume visualization of dynamic data. The resulting framework is general enough to adaptively handle visualization tasks such as direct volume rendering and isosurfacing while allowing the user to control the speed and quality of the animation.

Key words: volume rendering, unstructured grids, time-varying data

* Corresponding author.

Email addresses: fabiofb@inf.ufrgs.br (Fábio F. Bernardon), stevec@sci.utah.edu (Steven P. Callahan), comba@inf.ufrgs.br (João L. D. Comba), csilva@sci.utah.edu (Cláudio T. Silva).

Preprint submitted to Parallel Computing, 12 February 2007

1 Introduction

Advances in computational power are enabling the creation of increasingly sophisticated simulations generating vast amounts of data. Effective analysis of these large datasets is a growing challenge for scientists who must validate that their numerical codes faithfully represent reality. Data exploration through visualization offers powerful insights into the reliability and the limitations of simulation results, and fosters the effective use of results by non-modelers. Measurements of the real world using high-precision equipment also produce enormous amounts of information that require fast and precise processing techniques that produce intuitive results for the user.

However, at this time, there is a mismatch between the simulation and acquisition capabilities of existing systems, which are often based on high-resolution time-varying 3D unstructured grids, and the availability of visualization techniques. In a recent survey article on the topic, Ma [1] says:

“Research so far in time-varying volume data visualization has primarily addressed the problems of encoding and rendering a single scalar variable on a regular grid. ... Time-varying unstructured grid data sets have been either rendered in a brute force fashion or just resampled and downsampled onto a regular grid for further visualization calculations. ...”

One of the key problems in handling time-varying data is the raw size of the data that must be processed. For rendering, these datasets need to be stored (and/or staged) in either main memory or GPU memory. Data transfer rates create a bottleneck for the effective visualization of these datasets. A number of successful techniques for time-varying regular grids have used compression to mitigate this problem and allow for better use of resources. Most solutions described in the literature consider only structured grids, where exploiting coherence (either spatial or temporal) is easier due to the regular structure of the datasets. For unstructured grids, however, the compression is more challenging and several issues need to be addressed.

There are four fundamental pieces to adaptively volume render dynamic data. First, compression of the dynamic data for efficient storage is necessary to avoid exhausting available resources. Second, handling the data transfer of the compressed data is important to maintain interactivity. Third, efficient volume visualization solutions are necessary to provide high-quality images that lead to scientific insight. Furthermore, the framework has to be flexible enough to support multiple visualization techniques as well as data that changes at each frame. Fourth, maintaining a desired level of interactivity or allowing the user to change the speed of the animation is important for the user experience. Therefore, level-of-detail approaches that generally work on static datasets must be adapted to efficiently handle the dynamic case.

Fig. 1. Different time instances of the Turbulent Jet dataset consisting of one million tetrahedra and rendered at approximately six frames per second using direct volume rendering (top) and isosurfacing (bottom). Our user interface consists of an adjustable orange slider representing the level-of-detail and an adjustable gray slider representing the current time instance.

Since multiple CPUs and programmable GPUs are becoming common for desktop machines, we concentrate on efficiently using all the available resources. Our system performs decompression, object-space sorting, and level-of-detail operations with multiple threads on the CPUs for the next time-step while simultaneously rendering the current time-step using the GPU. This parallel computation results in only a small overhead for rendering time-varying data over previous static approaches.

To demonstrate the flexibility of our framework, we describe how the two most common visualization techniques for unstructured grids can be easily integrated. Both direct volume rendering and isosurface computation are incorporated into our adaptive, time-varying framework.

Though our goal is to eventually handle data that changes in geometry and even topology over time, here we concentrate on the more specific case of time-varying scalar fields on static geometry and topology. The main contributions of this paper are:

- We show how the data transfer bottleneck can be mitigated with compression of time-varying scalar fields for unstructured grids;
- We show how a hardware-assisted volume rendering system can be enhanced to efficiently prefetch dynamic data by balancing the CPU and GPU loads;
- We describe how direct volume rendering and isosurfacing can be incorporated into our adaptive framework;
- We introduce new importance sampling approaches for dynamic level-of-detail that operate on time-varying scalar fields.

Figure 1 illustrates our system in action on an unstructured grid representation of the Turbulent Jet dataset. The rest of this paper is organized as follows. Section 2 surveys related previous work. Section 3 outlines our system for adaptively volume rendering unstructured grids with time-varying scalar fields. The results of our algorithm are shown in Section 4, a brief discussion follows in Section 5, and conclusions and future work are described in Section 6.

2 Previous Work

The visualization of time-varying data is of obvious importance, and has been the source of substantial research. Here, we are particularly interested in the research literature related to compression and rendering techniques for this kind of data. For a more comprehensive review of the literature, we point the interested reader to the recent surveys by Ma [1] and Ma and Lum [2].

Very little has been done for compressing time-varying data on unstructured grids, therefore all the papers cited below focus on regular grids. Some researchers have explored the use of spatial data structures for optimizing the rendering of time-varying datasets [3-5]. The Time-Space Partitioning (TSP) Tree used in those papers is based on an octree which is extended to encode one extra dimension by storing a binary tree at each node that represents the evolution of the subtree through time [5]. The TSP tree can also store partial sub-images to accelerate rendering by ray-casting.

The compression of time-varying isosurfaces and associated volumetric data with a wavelet transform was first proposed in [6]. With the advance of texture-based volume rendering and programmable GPUs, several techniques explored shifting data storage and decompression into graphics hardware. Coupling wavelet compression of structured grids with decompression using texturing hardware was discussed in work by Guthe et al. [7,8]. They describe how to encode large, static datasets or time-varying datasets to minimize their size, thus reducing data transfer and allowing real-time volume visualization. Subsequent work by Strengert et al. [9] extended the previous approaches by employing a distributed rendering strategy on a GPU cluster. Lum et al. [10,11] compress time-varying volumes using the Discrete Cosine Transform (DCT).


Because the compressed datasets fit in main memory, they are able to achieve much higher rendering rates than for the uncompressed data, which needs to be incrementally loaded from disk. Because of their sheer size, I/O issues become very important when dealing with time-varying data [12].

More related to our work is the technique proposed by Schneider et al. [13]. Their approach relies on vector quantization to select the best representatives among the difference vectors obtained after applying a hierarchical decomposition of structured grids. Representatives are stored in textures and decompressed using fragment programs on the GPU. Since the multi-resolution representations are applied to a single structured grid, different quantization and compression tables are required for each time instance. Issues regarding the quality of rendering from compressed data were discussed by Fout et al. [14] using the approach described by Schneider et al. as a test case.

Hardware-assisted volume rendering has received considerable attention in the research community in recent years (for a recent survey, see [15]). Shirley and Tuchman [16] introduced the Projected Tetrahedra (PT) algorithm for decomposing tetrahedra into renderable triangles based on the view direction.

A visibility ordering of the tetrahedra before decomposition and rendering is necessary for correct transparency compositing [17]. More recently, Weiler et al. [18] describe a hardware ray caster which marches through the tetrahedra on the GPU in several rendering passes. Both of these algorithms require neighbor information of the mesh to correctly traverse the mesh for visibility ordering or ray marching, respectively. An alternative approach was introduced by Callahan et al. [19] which considers the tetrahedral mesh as unique triangles that can be efficiently rendered without neighbor information. This algorithm is ideal for dynamic data because the vertices, scalars, or triangles can change with each frame with very little penalty. Subsequent work by Callahan et al. [20] describes a dynamic level-of-detail (LOD) approach that works by sampling the geometry and rendering only a subset of the original mesh. Our system uses a similar approach for LOD, but adapted to handle time-varying data.

Graphics hardware has also been used in recent research to visualize isosurfaces of volumetric data. The classic Marching Cubes algorithm [21] and subsequent Marching Tetrahedra algorithm [22] provide a simple way to extract geometry from structured and unstructured meshes in object space. However, image-space techniques are generally more suitable for graphics hardware. One such algorithm was proposed by Sutherland et al. [23] which uses alpha-test functionality and a stencil buffer to perform plane-tetrahedron intersections. Isosurfaces are computed by interpolating alpha between front and back faces and using an XOR operation with the stencil buffer. Later, Röttger et al. [24] revised this algorithm and extended the PT algorithm to perform isosurfacing. Instead of breaking up the tetrahedra into triangles, the algorithm creates smaller tetrahedra which can be projected as triangles and tested for isosurface intersection using a 2D texture. More recent work by Pascucci [25] uses programmable hardware to compute isosurfaces. The algorithm considers each tetrahedron as a quadrilateral, which is sent to the GPU where the vertices are repositioned in a vertex shader to represent the isosurface. Our system takes advantage of recent hardware features to perform isosurface tests directly at the fragment level. Furthermore, our isosurfacing can be used directly in our time-varying framework and naturally benefit from our dynamic level-of-detail.

Fig. 2. An overview of our system. (a) Data compression and importance sampling for level-of-detail are performed in preprocessing steps on the CPU. (b) Then during each pass, level-of-detail selection, optional object-space sorting, and data decompression for the next step occur in parallel on multiple CPUs. (c) Simultaneously, the image-space sorting and volume visualization are processed on the GPU.

Several systems have previously taken advantage of multiple processors and multi-threading to increase performance. The iWalk system [26] by Correa et al. uses multiple threads to prefetch geometry and manage a working set of renderable primitives for large triangle scenes. More recent work by Vo et al. [27] extends this idea to perform out-of-core management and rendering of large volumetric meshes. Our approach is similar in spirit to these algorithms. We use multiple threads to prefetch changing scalar values, sort geometry, and prepare level-of-detail triangles at each frame.

3 Adaptive Time-Varying Volume Rendering

Our system for adaptive time-varying volume rendering of unstructured grids consists of four major components: compression, data transfer, hardware-assisted volume visualization, and dynamic level-of-detail for interactivity. Figure 2 shows the interplay of these components.


Desktop machines with multiple processors and multiple cores are becoming increasingly typical. Thus, interactive systems should exploit these features to leverage the available computational power. For graphics-heavy applications, the CPU cores can be used as an additional cache for the GPU, which may have limited memory capabilities. With this in mind, we have parallelized the operations on the CPU and GPU with multi-threading to achieve interactive visualization.

There are currently three threads in our system to parallelize the CPU portion of the code. These threads run while the geometry from the previous time-step is being rendered on the GPU. Therefore, the threads can be considered a prefetching of data in preparation for rasterization. The first thread handles decompression of the compressed scalar field (see Section 3.1). The second thread handles object-space sorting of the geometry (see Section 3.3). Finally, the third thread is responsible for rendering. All calls to the graphics API are done entirely in the rendering thread since only one thread at a time may access the API. This thread waits for the first two threads to finish, then copies the decompressed scalar values and sorted indices before rendering. This frees the other threads to compute the scalars and indices for the next time-step. These three threads work entirely in parallel and result in a time-varying visualization that requires very little overhead over static approaches.
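As an illustration of this prefetching pattern, the sketch below uses std::async in place of our persistent worker threads; the function names and data layout are placeholders rather than the actual implementation:

#include <cstdint>
#include <future>
#include <vector>

struct FrameData {
    std::vector<float>    scalars;  // decompressed scalar field for one time-step
    std::vector<uint32_t> indices;  // triangles in approximate visibility order
};

// Hypothetical stand-ins for the real decompression, sorting, and GPU code.
std::vector<float> decompressTimeStep(int t) { return std::vector<float>(1000, float(t)); }
std::vector<uint32_t> sortByCentroid() { return std::vector<uint32_t>(3000); }
void renderOnGPU(const FrameData&) { /* all graphics API calls stay on this thread */ }

void playAnimation(int numTimeSteps)
{
    FrameData current{decompressTimeStep(0), sortByCentroid()};
    for (int t = 0; t < numTimeSteps; ++t) {
        // Prefetch the next time-step on worker threads while the GPU renders
        // the current one.
        auto nextScalars = std::async(std::launch::async, decompressTimeStep,
                                      (t + 1) % numTimeSteps);
        auto nextIndices = std::async(std::launch::async, sortByCentroid);

        renderOnGPU(current);                 // GPU work for time-step t

        current.scalars = nextScalars.get();  // copy results for time-step t + 1
        current.indices = nextIndices.get();  // the workers are now free again
    }
}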

3.1 Compression

Compression is important to reduce the memory footprint of time-varying data, and the consideration of spatial and temporal coherence of the data is necessary when choosing a strategy. For example, it is common to exploit spatial coherence in time-varying scalar fields defined on structured grids, such as in the approach described by Schneider et al. [13], where a multi-resolution representation of the spatial domain using vector quantization is used. This solution works well when combined with texture-based volume rendering, which requires the decompression to be performed at any given point inside the volume by incorporating the differences at each resolution level.

In unstructured grids, the irregularity of topological and geometric information makes it hard to apply a multi-resolution representation over the spatial domain. In our system we apply compression on the temporal domain by considering scalar values individually for each mesh vertex. By grouping a fixed number of scalar values defined over a sequence of time instances we obtain a suitable representation for applying a multi-resolution framework.

Our solution collects blocks of 64 consecutive scalars associated with each mesh vertex, applies a multi-resolution representation that computes the mean of each block along with two difference vectors of size 64 and 8, and uses vector quantization to obtain two sets of representatives (codebooks) for each class of difference vectors. For convenience we use a fixed number of time instances, but compensate for this by increasing the number of entries in the codebooks if temporal coherence is reduced and leads to compression errors.

Fig. 3. Decompression of a given time instance for each mesh vertex requires adding the scalar mean of a block to the quantized differences recovered from the respective entries (i8 and i64) in the codebooks.

This solution works well for projective volume rendering that sends mesh faces in visibility order to be rendered. At each rendering step, the scalar value for each mesh vertex in a given time instance is decompressed by adding the mean of a given block interval to two difference values, which are recovered from the codebooks using two codebook indices i8 and i64 (Figure 3).
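A minimal sketch of this per-vertex decompression follows. The exact block layout is an assumption here: we take each size-8 codebook entry to hold one coarse difference per group of eight time-steps, and each size-64 entry to hold the fine per-time-step differences.

#include <array>
#include <vector>

// Per-vertex compressed representation of one 64-time-step block (illustrative layout).
struct CompressedBlock {
    float mean;  // mean scalar over the 64 time-steps
    int   i8;    // index into the codebook of size-8 difference vectors
    int   i64;   // index into the codebook of size-64 difference vectors
};

float decompress(const CompressedBlock& b,
                 const std::vector<std::array<float, 8>>&  codebook8,
                 const std::vector<std::array<float, 64>>& codebook64,
                 int t)  // time instance within the block, 0 <= t < 64
{
    // Assumed hierarchy: block mean + coarse difference of the 8-step
    // sub-block containing t + fine difference for t itself.
    return b.mean + codebook8[b.i8][t / 8] + codebook64[b.i64][t];
}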

3.2 Data Transfer

There are several alternatives to consider when decompressing and transferring time-varying data to the volume renderer. This is a critical point in our system and has a great impact on its overall performance. In this section we discuss the alternatives we explored and present our current solution. It is important to point out that with future CPU and GPU configurations this solution might need to be revisited.

Since the decompression is done on a per-vertex basis, our first approach was to use the vertex shader on the GPU. This requires the storage of codebooks as vertex textures, and the transfer for each vertex of three values as texture coordinates (mean and codebook indices i8 and i64). In practice, this solution does not work well because the current generation of graphics hardware does not handle vertex textures efficiently and incurs several penalties due to cache misses, and the arithmetic calculations in the decompression are too simple to hide this latency.

Our second approach was to use the GPU fragment shader. Since computation is done at the fragment level, the decompression and the interpolation of the scalar value for the fragment are necessary. This requires three decompression steps instead of a single step as with the vertex shader approach (which benefits from the interpolation hardware). Also, this computation requires accessing the mean and codebook indices. Sending this information as a single vertex attribute is not possible due to interpolation, and multiple vertex attributes increase the amount of data transfer per vertex. As our volume renderer runs in the fragment shader, this solution also increases the shader complexity and thus reduces performance of the system.

Our final (and current) solution is to perform the decompression on the CPU. Since the codebooks usually fit in CPU memory (a simple paging mechanism can be used for very large data), the main cost of this approach is to perform the decompression step and send scalar values through the pipeline. This data transfer is also necessary with the other two approaches. The number of decompression steps is reduced to the number of vertices, unlike the vertex shader approach which requires three times the number of faces.

3.3 Volume Rendering

Our system is based on the Hardware-Assisted Visibility Sorting (HAVS) algorithm of Callahan et al. [19], for which source code is publicly available. Figure 2 shows how the volume rendering system handles time-varying data. Our framework supports both direct volume rendering and isosurfacing.

The HAVS algorithm is a general visibility ordering algorithm for renderable primitives that works in both object-space and image-space. In object space the unique triangles that compose the tetrahedral mesh are sorted approximately by their centroids. This step occurs entirely on the CPU. In image space, the triangles are sorted and composited in correct visibility order using a fixed size A-buffer called the k-buffer. The k-buffer is implemented entirely on the GPU using fragment shaders. Because the HAVS algorithm operates on triangles with no need for neighbor information, it provides a flexible framework for handling dynamic data. In this case, the triangles can be stored on the GPU for efficiency, and the scalar values as well as the object-space ordering of the triangles can be streamed to the GPU at each time instance.
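The object-space stage amounts to an approximate back-to-front sort of the unique triangles by the distance from the eye to each centroid. A simple version with illustrative types is shown below (HAVS itself uses a faster approximate sort, so std::sort is only a stand-in):

#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Produce triangle indices sorted back-to-front by squared eye-to-centroid distance.
std::vector<uint32_t> centroidSort(const std::vector<Vec3>& verts,
                                   const std::vector<std::array<uint32_t, 3>>& tris,
                                   const Vec3& eye)
{
    std::vector<float> dist2(tris.size());
    for (size_t i = 0; i < tris.size(); ++i) {
        const Vec3& a = verts[tris[i][0]];
        const Vec3& b = verts[tris[i][1]];
        const Vec3& c = verts[tris[i][2]];
        float gx = (a.x + b.x + c.x) / 3.0f - eye.x;
        float gy = (a.y + b.y + c.y) / 3.0f - eye.y;
        float gz = (a.z + b.z + c.z) / 3.0f - eye.z;
        dist2[i] = gx * gx + gy * gy + gz * gz;
    }
    std::vector<uint32_t> order(tris.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = uint32_t(i);
    std::sort(order.begin(), order.end(),
              [&](uint32_t l, uint32_t r) { return dist2[l] > dist2[r]; });  // farthest first
    return order;
}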

Our algorithm extends the HAVS algorithm to time-varying data with virtually no overhead by taking advantage of the HAVS architecture. Since work performed on the CPU can be performed simultaneously with work on the GPU, we can leverage this parallelization to prefetch the time-varying data. During the GPU rendering stage of the current time instance, we use the CPU to decompress the time-varying field of the next time-step and prepare it for rendering. We also distinguish user-provided viewing transformations that affect visibility order from those that do not, and perform visibility ordering only when necessary. Therefore, the object-space centroid sort only occurs on the CPU during frames that have a change in the rotation transformation. This avoids unnecessary computation when viewing time-varying data.

To manage the time-stepping of the time-varying data, our algorithm automatically increments the time instance at each frame. To allow more control from the user, we also provide a slider for interactive exploration of the time instances.

3.3.1 Direct Volume Rendering

The HAVS algorithm performs direct volume rendering with the use of pre-integration [28]. In a preprocess, a three-dimensional lookup table is computed that contains the color and opacity for every pair of scalars and the distance between them (s_f, s_b, d). Then, as fragments are rasterized, the k-buffer is used to retrieve the closest two fragments, and the front scalar s_f, back scalar s_b, and distance d between them are calculated. The color and opacity for the gap between the fragments is determined with a texture lookup, then composited into the framebuffer. More details on direct volume rendering with the HAVS algorithm can be found in [19].
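Conceptually, the per-fragment work reduces to a table lookup followed by front-to-back compositing. The CPU-side sketch below mirrors that logic; the table resolution and nearest-neighbor sampling are assumptions, and in our system this step runs in the fragment shader:

#include <algorithm>
#include <vector>

struct RGBA { float r, g, b, a; };

// Pre-integrated table indexed by (front scalar, back scalar, distance), each normalized to [0,1].
struct PreIntTable {
    int n;                   // resolution per axis
    std::vector<RGBA> data;  // n*n*n entries
    RGBA lookup(float sf, float sb, float d) const {
        auto idx = [&](float v) { return std::min(n - 1, int(v * (n - 1) + 0.5f)); };
        return data[(idx(sf) * n + idx(sb)) * n + idx(d)];
    }
};

// Composite the contribution of one ray gap into the accumulated pixel color (front to back).
void compositeGap(RGBA& dst, const PreIntTable& table, float sf, float sb, float d)
{
    RGBA src = table.lookup(sf, sb, d);
    float t = 1.0f - dst.a;  // remaining transparency
    dst.r += t * src.a * src.r;
    dst.g += t * src.a * src.g;
    dst.b += t * src.a * src.b;
    dst.a += t * src.a;
}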

3.3.2 Isosurfaces

We extend the HAVS algorithm to perform isosurfacing in our time-varying framework. The fragments are sorted using the k-buffer as described above. However, instead of compositing the integral contribution for the gap between the first and second fragments, we perform a simple test to determine if the isosurface value lies between them. If so, the isosurface depth is determined by interpolating between the depths of the two fragments. The result is a texture that contains the depth for the isosurface at each pixel (i.e., a depth buffer), which can be displayed directly as a color buffer or post-processed to include shading.
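The per-gap test itself is only a few lines; a CPU-side sketch of what our fragment shader does between the two closest k-buffer entries is:

// Test whether the isovalue lies in the scalar gap between the two closest fragments.
// If it does, return the linearly interpolated depth of the crossing through zIso.
bool isosurfaceInGap(float sFront, float zFront,  // scalar and depth of the front fragment
                     float sBack,  float zBack,   // scalar and depth of the back fragment
                     float isoValue, float& zIso)
{
    float lo = sFront < sBack ? sFront : sBack;
    float hi = sFront < sBack ? sBack  : sFront;
    if (sFront == sBack || isoValue < lo || isoValue > hi)
        return false;
    float t = (isoValue - sFront) / (sBack - sFront);  // 0 at the front, 1 at the back
    zIso = zFront + t * (zBack - zFront);
    return true;
}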

There are several differences between our isosurfacing using HAVS and previous hardware-assisted approaches. First, with the advent of predicates in the fragment shader, a direct isosurface comparison can be performed efficiently, without the need for texture lookups as in previous work by Röttger et al. [24]. Second, the k-buffer naturally provides the means to handle multiple transparent isosurfaces by compositing the isosurface fragments in the order that they are extracted. Third, to handle lighting of the isosurfaces, we avoid keeping normals in our k-buffer by using an extra shading pass. One option is a simple depth-based shading [29], which may not give sufficient detail of the true nature of the surface. Another option is to use a gradient-free shading approach similar to work by Desgranges et al. [30], which uses a preliminary pass over the geometry to compute a shadow buffer. Instead, we avoid this extra geometry pass by using screen-space shading of the computed isosurface through central differencing on the depth buffer [31]. This results in fast and high quality shading. The isosurface figures shown in this paper were generated with the latter shading model. Finally, since our isosurfacing algorithm is based on our direct volume rendering algorithm, the same level-of-detail strategies can be used to increase performance. However, level-of-detail rendering may introduce discontinuities in the depths that define the isosurface, adversely affecting the quality of the image-space shading. Thus, we perform an additional smoothing pass on the depths using a convolution filter, which removes sharp discontinuities, before the final shading. This extra pass is inexpensive and typically only necessary at low levels-of-detail.
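As an illustration of the image-space shading pass, the sketch below estimates a normal from central differences on the depth buffer and returns a simple diffuse term for a light at the eye; the depth-to-pixel scale and the lighting model are assumptions, not the exact shader we use.

#include <cmath>
#include <vector>

// Diffuse shading term for pixel (x, y) from central differences on the depth buffer.
float shadeFromDepth(const std::vector<float>& depth, int w, int h, int x, int y,
                     float depthScale)  // assumed scale between depth and pixel units
{
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return 1.0f;
    auto d = [&](int i, int j) { return depth[j * w + i]; };
    float dzdx = (d(x + 1, y) - d(x - 1, y)) * 0.5f * depthScale;
    float dzdy = (d(x, y + 1) - d(x, y - 1)) * 0.5f * depthScale;
    // The unnormalized normal of the depth surface is (-dzdx, -dzdy, 1); its z component
    // after normalization is the N.L term for a headlight along +z.
    return 1.0f / std::sqrt(dzdx * dzdx + dzdy * dzdy + 1.0f);
}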

3.4 Time-Varying Level-of-Detail

Recent work by Callahan et al. [20] introduces a new dynamic level-of-detail (LOD) approach that works by using a sample-based simplification of the geometry. This algorithm operates by assigning an importance to each triangle in the mesh in a preprocessing step based on properties of the original geometry. Then, for each pass of the volume renderer, a subset of the original geometry is selected for rendering based on the frame rate of the previous pass. This recent LOD strategy was incorporated into the original HAVS algorithm to provide a more interactive user experience.

An important consideration for visualizing time-varying data is the rate at which the data is progressing through the time instances. To address this problem, our algorithm uses this LOD approach to allow the user to control the speed and quality of the animation. Since we are dealing with time-varying scalar fields, heuristics that attempt to optimize the quality of the mesh based on the scalar field are ideal. However, approaches that are based on a static mesh can be poor approximations when considering a dynamically changing scalar field.

Callahan et al. introduce a heuristic based on the scalar field of a static mesh for assigning an importance to the triangles. The idea is to create a scalar histogram and use stratified sampling to stochastically select the triangles that cover the entire range of scalars. This approach works well for static geometry, but may miss important regions if applied to a time-step that does not represent the whole time-series well. Recent work by Akiba et al. [32] classifies time-varying datasets as either statistically dynamic or statistically static depending on the behavior of the time histogram of the scalars. A statistically dynamic dataset may have scalars that change dramatically in some regions of time, but remain constant during others. In contrast, the scalars in statistically static datasets change consistently throughout the time-series. Because datasets vary differently, we have developed two sampling strategies for dynamic LOD: a local approach for statistically dynamic datasets and a global approach for statistically static datasets.

To incorporate these LOD strategies into our time-varying system, we allow two types of interactions based on user preference. The first is to keep the animation at a desired frame-rate independent of the data size or viewing interaction. This dynamic approach adjusts the LOD on the fly to maintain interactivity. Our second type of interaction allows the user to use a slider to control the LOD. This slider dynamically changes the speed of the animation by setting the LOD manually. Since visibility ordering-dependent viewing transformations occur on the CPU in parallel to the GPU rendering, they do not change the LOD or speed of the animation. Figure 2 shows the interaction of the LOD algorithm with the time-varying data.
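A minimal controller for the frame-rate-driven mode can simply scale the triangle budget by the ratio of the target frame time to the measured one; the clamping and damping constants below are illustrative:

#include <algorithm>

// Adjust the fraction of importance-ordered triangles to render so that the measured
// frame time approaches the target. Called once per frame.
float updateLOD(float currentFraction,  // fraction of triangles rendered, in (0, 1]
                float lastFrameMs,      // measured time of the previous frame
                float targetFrameMs)    // e.g., 100 ms for a 10 fps animation
{
    float scale = targetFrameMs / std::max(lastFrameMs, 1e-3f);
    scale = 0.5f * (scale + 1.0f);  // damp oscillations between frames
    return std::clamp(currentFraction * scale, 0.01f, 1.0f);
}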

3.4.1 Local Sampling

Datasets with scalars that vary substantially in some time regions, but very little in others, benefit from a local sampling approach. The general idea is to apply methods for static datasets to multiple time-steps of a time-varying dataset and change the sampling strategy to correspond to the current region. We perform local sampling by selecting time-steps at regular intervals in the time-series and importance sampling the triangles in those time-steps using the previously described stochastic sampling of scalars. Since the LOD selects a subset of triangles ordered by importance, we create a separate list of triangles for each partition of the time sequence. Then, during each rendering step, the region is determined and the corresponding list is used for rendering. Since changing the list also requires the triangles to be resorted for rendering, we perform this operation in the sorting thread to minimize delays between frames.
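In code, the local strategy reduces to keeping one importance-ordered triangle list per partition of the time sequence and switching lists when the animation crosses an interval boundary; the sketch below uses illustrative types, and the resort it triggers happens in the sorting thread:

#include <cstdint>
#include <vector>

struct LocalLOD {
    std::vector<std::vector<uint32_t>> listsPerInterval;  // importance-ordered, built in preprocessing
    int timeStepsPerInterval = 1;

    // Triangles to render at time-step t for the given level-of-detail fraction.
    std::vector<uint32_t> select(int t, float lodFraction) const {
        const std::vector<uint32_t>& list = listsPerInterval[t / timeStepsPerInterval];
        size_t count = size_t(lodFraction * list.size());
        return std::vector<uint32_t>(list.begin(), list.begin() + count);
    }
};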

Statistically dynamic datasets benefit from this approach because the triangles are locally optimized for rendering. The disadvantage of this approach is that because the active set of renderable triangles may change between time intervals, flickering may occur. However, this is a minor issue when using dynamic LOD because the number of triangles drawn at each frame may be changing anyway, to maintain interactivity.


Fig. 4. Time-varying level-of-detail (LOD) strategy using the coefficient of variation for the Torso dataset (50K tetrahedra and 360 time steps). For a close-up view using direct volume rendering: (a) 100% LOD at 18 fps, (b) 50% LOD at 40 fps, (c) 25% LOD at 63 fps, and (d) 10% LOD at 125 fps. For the same view using isosurfacing: (e) 100% LOD at 33 fps, (f) 50% LOD at 63 fps, (g) 25% LOD at 125 fps, and (h) 10% LOD at 250 fps. Front isosurface fragments are shown in light blue and back fragments are shown in dark blue.

Certain datasets may contain regions in time where none of the scalars are changing and other regions where many scalars are changing. These datasets would benefit from a non-linear partitioning of the time sequence (e.g., logarithmic). A more automatic approach to partitioning the time-steps is to greedily select areas with the highest scalar variance to ensure that important changes are not missed. In our experiments, a regular interval is used because our experimental datasets do not fall into this category.

3.4.2 Global Sampling

A global strategy is desirable for datasets that are statistically static due to its simplicity and efficiency. For global sampling, one ordering is determined that attempts to optimize all time-steps. This has the advantage that it does not require changing the triangle ordering between frames and thus gives a smoother appearance during an animation.

A sampling can be obtained globally using a statistical measure of the scalars. For n time-steps, consider the n scalar values s for one vertex as an independent random variable X; then the expectation at that position can be expressed as

E[X] = \sum_{1}^{n} s \, Pr\{X = s\},

where Pr\{X = s\} = 1/n. The dispersion of the probability distribution of the scalars at a vertex can then be expressed as the variance of the expectation:

Var[X] = E[X^2] - E^2[X] = \sum_{1}^{n} \frac{s^2}{n} - \left( \sum_{1}^{n} \frac{s}{n} \right)^2

In essence, this gives a spread of the scalars from their expectation. To measure dispersion of probability distributions with widely differing means, it is common to use the coefficient of variation C_v, which is the ratio of the standard deviation to the expectation. This metric has been used in related research for transfer function specification on time-varying data [32,33] and as a measurement for spatial and temporal error in time-varying structured grids [5]. Thus, for each triangle t, the importance can be assigned by calculating the sum of the C_v for each vertex as follows:

C_v(t) = \sum_{i=1}^{3} \frac{\sqrt{Var[X_{t(i)}]}}{E[X_{t(i)}]}

This results in a dimensionless quantity that can be used for assigning importance to each face by directly comparing the amount of change that occurs at each triangle over time.
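A direct implementation of this importance measure accumulates the per-vertex mean and variance over the time-series and then sums the per-vertex coefficients of variation for each triangle (illustrative types; a small epsilon guards against near-zero means):

#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

// scalars[t][v] is the scalar value of vertex v at time-step t.
std::vector<float> cvImportance(const std::vector<std::vector<float>>& scalars,
                                const std::vector<std::array<uint32_t, 3>>& tris)
{
    size_t n = scalars.size(), numVerts = scalars[0].size();
    std::vector<float> mean(numVerts, 0.0f), meanSq(numVerts, 0.0f);
    for (size_t t = 0; t < n; ++t)
        for (size_t v = 0; v < numVerts; ++v) {
            mean[v]   += scalars[t][v] / n;                  // E[X]
            meanSq[v] += scalars[t][v] * scalars[t][v] / n;  // E[X^2]
        }
    std::vector<float> cv(numVerts);
    for (size_t v = 0; v < numVerts; ++v) {
        float var = std::max(meanSq[v] - mean[v] * mean[v], 0.0f);  // Var[X]
        cv[v] = std::sqrt(var) / (std::fabs(mean[v]) + 1e-6f);      // C_v per vertex
    }
    std::vector<float> importance(tris.size());
    for (size_t i = 0; i < tris.size(); ++i)
        importance[i] = cv[tris[i][0]] + cv[tris[i][1]] + cv[tris[i][2]];  // C_v(t) per triangle
    return importance;
}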

The algorithm provides good quality visualizations even at lower levels-of-detail because the regions of interest (those that are changing) are given a higher importance (see Figure 4). The described heuristic works especially well in statistically static datasets if the mesh has regions that change very little over time, since they are usually assigned a lower opacity and their removal introduces very little visual difference.

4 Results

In this section we report results obtained using a PC with Pentium D 3.2 GHz processors, 2 GB of RAM, and an NVidia 7800 GTX GPU with 256 MB of RAM. All images were generated at 512 x 512 resolution.


Table 1
Results of compression sizes, ratios, and error.

Mesh   Num Verts  Num Tets  Time Instances  Size TVSF  Size VQ  Comp. Ratio  SNR Min  SNR Max  Max Error
SPX1   36K        101K      64              9.0M       504K     18.3         39.5     42.0     0.005
SPX2   162K       808K      64              40.5M      2.0M     20.6         39.2     42.0     0.009
SPXF   19K        12K       192             14.7M      2.0M     7.1          20.8     30.2     0.014
Blunt  40K        183K      64              10.0M      552K     18.6         41.7     44.4     0.005
Torso  8K         50K       360             11.2M      1.0M     11.4         20.5     28.1     0.002
TJet   160K       1M        150             93.8M      2.7M     34.7         5.3      17.9     0.204

4.1 Datasets

The datasets used in our tests are diverse in size and number of time instances. The time-varying scalars on the SPX1, SPX2 and Blunt Fin datasets were procedurally generated by linearly interpolating the original scalars to zero over time. The Torso dataset shows the result of a simulation of a rotating dipole in the mesh. The SPX-Force (SPXF) dataset represents the magnitude of reaction forces obtained when a vertical force is applied to a mass-spring model that has as particles the mesh vertices and as springs the edges between mesh vertices. Finally, the Turbulent Jet (TJet) dataset represents a regular time-varying dataset that was tetrahedralized and simplified to a reduced representation of the original. The meshes used in our tests with their respective sizes are listed in Table 1, and results showing different time instances are shown in Figures 1, 4, 5, and 6.

4.2 Compression

The compression of Time-Varying Scalar Field (TVSF) data uses an adaptation of the vector quantization code written by Schneider et al. [13], as described in Section 3.1. The original code works with structured grids with building blocks of 4 x 4 x 4 (for a total of 64 values per block). To adapt its use for unstructured grids it is necessary to group TVSF data into basic blocks with the same number of values. For each vertex in the unstructured grid, the scalar values corresponding to 64 contiguous instances of time are grouped into a basic block and sent to the VQ code.

The VQ code produced two codebooks containing difference vectors for the first and second level in the multi-resolution representation, each with 256 entries (64 x 256 and 8 x 256 codebooks). For our synthetic datasets this configuration led to acceptable compression results, as seen in Table 1. However, for the TJet and SPXF datasets we increased the number of entries in the codebook due to the compression error obtained. Both datasets were compressed using codebooks with 1024 entries.

The size of TVSF data without compression is given by size_u = v x t x 4 B, where v is the number of mesh vertices, t is the number of time instances in each dataset, and each scalar uses four bytes (float). The compressed size using VQ is size_vq = v x c x 3 x 4 B + c x size_codebook, where c is the number of codebooks used (c = t/64), s is the number of entries in each codebook (256 or 1024), each vertex requires 3 values per codebook (mean plus codebook indices i8 and i64), and each codebook size corresponds to s x 64 x 4 B + s x 8 x 4 B.
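As a rough check of these formulas against Table 1, assume SPX1 has about 36,000 vertices and 64 time instances with one pair of 256-entry codebooks: size_u = 36,000 x 64 x 4 B, roughly 9 MB, and size_vq = 36,000 x 1 x 3 x 4 B + 256 x (64 + 8) x 4 B, roughly 0.5 MB, which is consistent with the 9.0M, 504K, and 18.3 entries reported for SPX1.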

In Table 1 we summarize the compression results we obtained. In addition to the signal-to-noise ratio (SNR) given by the VQ code, we also measured the minimum and maximum discrepancy between the original and quantized values. Results show that procedurally generated datasets have a higher SNR and smaller discrepancy, since they have a smoother variation in their scalars over time. The TJet dataset has smaller SNR values because it represents a real fluid simulation, but it also led to higher compression ratios due to its fixed codebook sizes. In general, the quality of the compression corresponds with the variance in the scalars between steps. Thus, datasets with smoother transitions result in less compression error. A limitation of the current approach is that, because it exploits temporal coherence, it may not be suitable for datasets with abrupt changes between time instances. In this case, compression methods that focus more on spatial coherence may be necessary.

4.3 Rendering

The rendering system allows the user to interactively inspect the time-varying data, continuously play all time instances, and pause or even manually select a given time instance by dragging a slider. Level-of-detail changes dynamically to achieve interactive frame rates, or can be manually set using a slider. Changing from direct volume rendering to isosurfacing is accomplished by pressing a key. Similarly, the isovalue can be interactively changed with the press of a key and incurs no performance overhead.

Rendering time statistics were produced using a fixed number of viewpoints. In Table 2 we show timing results for our experimental datasets. To compare the overhead of our system with the original HAVS system that handles only static data, we also measure the rendering rates for static scalar fields without multi-threading. The dynamic overhead is minimal even for the larger datasets. In fact, for some datasets, our multi-threading approach is faster with dynamic data than single threading with static data. Note that for the smaller datasets, we do not report frame-rates greater than 60 frames per second since it is difficult to accurately measure higher rates.

Fig. 5. Comparison of direct volume rendering using (a) uncompressed and (d) compressed data on the TJet dataset (1M tetrahedra, 150 time steps). Level-of-detail is compared at 5% for the TJet and 3% for the Torso dataset (50K tetrahedra, 360 time steps) using (b)(c) our local approach and (e)(f) our global approach.

In addition to the compression results described above, we evaluate the image quality for all datasets by comparing it against the rendering from uncompressed data. For most datasets the difference in image quality was minimal. However, for the TJet dataset (the one with the smaller SNR values), there are some small differences that can be observed in close-up views of the simulation (see Figure 5).

4.4 Level-of-Detail

Our sample-based level-of-detail for time-varying scalar fields computes the importance of the faces in a preprocessing step that takes less than two seconds for the global strategy or for the local strategy using six intervals, even for the largest dataset in our experiments. In addition, there is no noticeable overhead in adjusting the level-of-detail on a per-frame basis because only the number of triangles in the current frame is computed [20]. Figure 4 shows the results of our global level-of-detail strategy on the Torso dataset at decreasing levels-of-detail. Figure 5 shows a comparison of our global and local sampling strategies. To capture an accurate comparison, the local sampling results are shown in the middle of an interval, thus showing an average quality. In our experiments, the frame-rates increase at the same rate as the level-of-detail decreases (see Figure 4) for both strategies.

Table 2
Performance measures for static and time-varying (TV) scalar fields for direct volume rendering (DVR) and isosurfacing (Iso). Static measurements were taken without multithreading. Performance is reported for each dataset with object-space sorting (during rotations) and without object-space sorting (otherwise).

Mesh   Sort  DVR Static FPS  DVR TV FPS  DVR TV Tets/s  Iso Static FPS  Iso TV FPS  Iso TV Tets/s
SPX1   Y     24.2            32          3.3M           26.4            28.2        2.9M
SPX1   N     42.6            41.7        4.3M           51.1            43.5        4.5M
SPX2   Y     2.8             2.9         2.4M           2.9             2.9         2.4M
SPX2   N     7.6             7.5         6.2M           8.2             8.2         6.8M
SPXF   Y     >60             >60         0.7M           >60             >60         0.7M
SPXF   N     >60             >60         0.7M           >60             >60         0.7M
Blunt  Y     16.1            20.4        3.8M           15.9            19.5        3.6M
Blunt  N     25.6            27.5        5.2M           31.2            31.1        5.8M
Torso  Y     40.6            31.2        1.6M           44.8            31.2        1.6M
Torso  N     >60             >60         3.1M           >60             >60         3.1M
TJet   Y     2.3             2.1         2.1M           2.2             2.4         2.4M
TJet   N     6.1             5.8         5.8M           5.7             6           6.0M

5 Discussion

This paper extends our previously published work [34]. There are three main contributions that we present here that were not in the original paper. First, we describe a multi-threaded framework to improve performance and distribute computation between multiple cores of the CPU and the GPU. Second, we expand our description of time-varying level-of-detail to include both global and local approaches to more accurately handle a variety of data types. Third, we generalize our rendering algorithm to handle isosurfacing as well as direct volume rendering. This provides a fast time-varying solution to isosurfacing with level-of-detail and is important to show that our framework can handle the two most common visualization techniques for unstructured grids. Apart from these larger changes, many minor changes in the text, figures, and results were made to correspond with these new contributions.

An important consideration for the framework we have presented is the scalability of the solution on current and future hardware configurations. Development of faster processors is reaching physical limits that are expensive and difficult to overcome. This is leading the industry to shift from the production of faster single-core processors to multi-core machines. On the other hand, graphics hardware has not yet reached these same physical limitations. Even so, hardware vendors are already providing multiple GPUs along with multiple CPUs. The use of parallel technology ensures that the processing power of commodity computers keeps growing independently of some physical limitations, but new applications must be developed with this new reality in mind to take full advantage of the new features. In particular, for data-intensive graphics applications, an efficient load balance needs to be maintained between resources to provide interactivity. In this paper, we focus on this multi-core configuration that is becoming increasingly popular on commodity machines instead of focusing on traditional CPU clusters.

To take advantage of multi-core machines with programmable graphics hardware, we separate the computation into three components controlled by different threads. For the largest dataset in our experiments, the computation time for the three components is distributed as follows: 2% decompression, 55% sorting, and 45% rendering. For the smaller datasets, the processing time shifts slightly more towards rendering. With more available resources, the rendering and the sorting phases could benefit from additional parallelization.

To further parallelize the sorting thread, the computation could be split amongst multiple cores on the CPU. This could be done in two ways. The first is to use a sort-first approach that performs parallel sorting on the geometry in screen space (see Gasarch et al. [35]), then pushes the entire geometry to the graphics hardware. The second is a sort-last approach that breaks the geometry into chunks that are sorted separately and sent to the graphics hardware for rendering and compositing (see Vo et al. [27]).

Because of the parallel nature of modern GPUs, the vertex and fragment processing automatically occurs in parallel. Even so, multiple-GPU machines are becoming increasingly common. The effective use of this technology, however, is a complex task, especially for scientific computing. We have performed some tentative tests of our framework on a computer with an NVidia SLI configuration. SLI, or Scalable Link Interface, is an NVidia technology developed to synchronize multiple GPUs (currently two or four) inside one computer. This technology was developed to automatically enhance the performance of graphics applications, and it offers two new operation modes: Split Frame Rendering (SFR) and Alternate Frame Rendering (AFR). SFR splits the screen into multiple regions and assigns each region to a GPU. AFR renders every frame on a different GPU, cycling between all available resources. Our experiments with both SFR and AFR on a dual SLI machine did not improve performance with our volume rendering algorithm. The SFR method must perform k-buffer synchronization on the cards so frequently that no performance benefit is achieved. With AFR, the GPU selection is controlled by the driver and changes at the end of each rendering pass. With a multi-pass rendering algorithm, this forces synchronization between GPUs and results in the same problem as SFR. In the future, as more control of these features becomes available, we will be able to take advantage of multiple-GPU machines to improve performance.

6 Conclusion

Rendering dynamic data is a challenging problem in volume visualization. In this paper we have shown how time-varying scalar fields on unstructured grids can be efficiently rendered using multiple visualization techniques with virtually no penalty in performance. In fact, for the larger datasets in our experiments, time-varying rendering only incurred a performance penalty of 6% or less. We have described how vector quantization can be employed with minimal error to mitigate the data transfer bottleneck while leveraging a GPU-assisted volume rendering system to achieve interactive rendering rates. Our algorithm exploits both the CPU and GPU concurrently to balance the computation load and avoid idle resources. In addition, we have introduced new time-varying approaches for dynamic level-of-detail that improve upon existing techniques for static data and allow the user to control the interactivity of the animation. Our algorithm is simple, easily implemented, and most importantly, it closes the gap between rendering time-varying data on structured and unstructured grids. To our knowledge this is the first system for handling time-varying data on unstructured grids in an interactive manner.

In the future, we plan to explore the VQ approach to find a general way of choosing its parameters based on dataset characteristics. Also, when next generation graphics cards become available, we would like to revisit our GPU solution to take advantage of new features. Finally, we would like to explore solutions for time-varying geometry and topology.

7 Acknowledgments

The authors thank J. Schneider for the VQ code, Louis Bavoil for the isosurfacing shaders, Mike Callahan and the SCIRun team at the University of Utah for the Torso dataset, Bruno Notrosso (Électricité de France) for the SPX dataset, Kwan-Liu Ma for the TJet dataset, Huy Vo for the tetrahedral simplification, and NVIDIA for donated hardware. The work of Fábio Bernardon and João Comba is supported by CNPq grant 478445/2004-0. The work of Steven Callahan and Cláudio Silva has been supported by the National Science Foundation under grants CCF-0401498, EIA-0323604, OISE-0405402, IIS-0513692, and CCF-0528201, the Department of Energy, an IBM

20

神州数码交换机路由器命令汇总(最简输入版)

神州数码交换机路由器命令汇总(部分) 2016年3月23日星期三修改_________________________________________________________________________ 注:本文档命令为最简命令,如不懂请在机器上实验 注:交换机版本信息: DCRS-5650-28(R4) Device, Compiled on Aug 12 10:58:26 2013 sysLocation China CPU Mac 00:03:0f:24:a2:a7 Vlan MAC 00:03:0f:24:a2:a6 SoftWare Version 7.0.3.1(B0043.0003) BootRom Version 7.1.103 HardWare Version 2.0.1 CPLD Version N/A Serial No.:1 Copyright (C) 2001-2013 by Digital China Networks Limited. All rights reserved Last reboot is warm reset. Uptime is 0 weeks, 0 days, 1 hours, 42 minutes 路由器版本信息: Digital China Networks Limited Internetwork Operating System Software DCR-2659 Series Software, Version 1.3.3H (MIDDLE), RELEASE SOFTWARE Copyright 2012 by Digital China Networks(BeiJing) Limited Compiled: 2012-06-07 11:58:07 by system, Image text-base: 0x6004 ROM: System Bootstrap, Version 0.4.2 Serial num:8IRTJ610CA15000001, ID num:201404 System image file is "DCR-2659_1.3.3H.bin" Digital China-DCR-2659 (PowerPC) Processor 65536K bytes of memory,16384K bytes of flash Router uptime is 0:00:44:44, The current time: 2002-01-01 00:44:44 Slot 0: SCC Slot Port 0: 10/100Mbps full-duplex Ethernet Port 1: 2M full-duplex Serial Port 2: 2M full-duplex Serial Port 3: 1000Mbps full-duplex Ethernet Port 4: 1000Mbps full-duplex Ethernet Port 5: 1000Mbps full-duplex Ethernet Port 6: 1000Mbps full-duplex Ethernet

XXX医院无线网建设方案

XXXX医院 无线查房系统简介 神州数码网络有限公司 2015年11月

需求分析 无线查房是医院HIS 系统的一项应用。随着电子病历在医院的普及,医疗影像和心电图等也逐步变为数字化图像,而数字化价值的真正体现主要在于医生在诊断和查房时能够随时调阅病人相关病历。现在已有不少医院成功地实现了无线查房系统,医生只需轻轻点击随身携带的平板电脑或PDA,就可调阅病人的病历、医嘱和各种检查、化验以及护理等信息,同时可以直接在床边下医嘱,记录病情变化,并即时传输至科室和医院的管理终端。有了这套无线查房系统,医生再也不用抱着厚厚的病历和医嘱走进病房,他们可以利用手中的工具,更高效更便捷地工作。 一、无线查房系统需求分析 移动医疗覆盖的范围极为广泛,如病区移动查房、院外移动医疗及办公应用、母婴管理、特殊病人管理、医院特殊重地管理、医疗设备管理、特殊药品监管等等,而医院目前较为紧迫的是为了提高工作效率、减低医疗错误,如何解决医生护士病区移动查房的需求。 基于如上的业务需求,衍生出移动护理和无线查房的需求,即将病区进行无线信号覆盖,医生通过移动医疗推车及平板电脑,实行病人床边医疗服务,包括病历查看、书写、查看检验检查报告、医嘱录入等。护士进行床边医嘱执行时,首先用移动终端扫描病人二维或RFID 腕带确认病人身份,然后再扫描病人药物条码,核对医嘱执行(记录所有医嘱执行信息、执行时间信息以及执行护士的身份信息)。给药流程结束后,所有流程信息均被记录在HIS 中,便于医院管理查询以及事后的追踪,当然护士也可通过移动终端直接在患者床边采集和录入病人体征数据等关键信息。 二、病区内无线AP位置部署方案 从病房布局来看,常见的病房布局主要有以下两种。 目前,我国医院楼宇主要分为板型建筑布局和不规则建筑布局两类,病房主要呈现以下特点: (1)病区内病房密度较高,墙体为砖墙结构,且每间房间自带卫生间 病区内会存在更多墙体障碍物,必须通过增加无线AP的部署数量或适当的信号功分以减少信号覆盖的盲区。

神州数码交换机“生成树”配置

神州数码交换机“生成树”配置 SwitchA配置: SwitchA(config)#spanning-tree(开启生成树)SwitchA(config)#spanning-tree mode mstp (选择生成树模式) SwitchA(config)#spanning-tree mst configuration (进入生成树实例配置) SwitchA(config-mstp-region)#name MSTP (设置MSTP域名为MSTP) SwitchA(config-mstp-region)#revision-level 2 (设置MSTP修正级别) SwitchA(config-mstp-region)#instance 0 vlan 10 (创建实例0将Vlan10划分进去) SwitchA(config-mstp-region)#instance 1 vlan 20 (创建实例1将Vlan20划分进去) SwitchA(config)#spanning mst 0 priority 0

(配置实例0的优先级为0,也是交换机的优先级,根交换机) SwitchA(config)#spanning mst 1 priority 4096 注:这儿的优先级越低越优先,优先级默认为32768,只能为4096的倍数。 SwitchB配置: SwitchB(config)#spanning-tree(开启生成树)SwitchB(config)#spanning-tree mode mstp (选择生成树模式) SwitchB(config)#spanning-tree mst configuration (进入生成树实例配置) SwitchB(config-mstp-region)#name MSTP (设置MSTP域名为MSTP) SwitchB(config-mstp-region)#revision-level 2 (设置MSTP修正级别) SwitchB(config-mstp-region)#instance 0 vlan 10 (创建实例0将Vlan10划分进去) SwitchB(config-mstp-region)#instance 1 vlan 20 (创建实例1将Vlan20划分进去) SwitchA(config)#spanning mst 0 priority 4096 SwitchA(config)#spanning mst 1 priority 0

神州数码交换机配置实例

第7章综合实训 7.1 综合实训一 1.背景介绍 下图为园区网络的拓扑图,网络中使用了两台三层交换机DCRS-5526A和DCRS-5526B 提供内部网络的互联,网络边缘采用了一台路由器DCR-1702用于连接到internet,DCR-2611用于模拟Internet中的路由器。 DCRS-5526A上连接一台PC,PC处于Vlan100中。DCRS-5526B连接一台FTP服务器和一台打印服务器,两台服务器处于VLAN200中,DCRS-5526A与DCRS-5526B之间使用交换机间链路相连。DCRS-5526A与DCRS-5526B使用具有三层特性的物理端口与DCR-1702相连。在internet上有一台与DCR-2611相连的外部Web服务器。 说明:本实验采用的设备均为神州数码公司网络产品。其中DCRS-5526为三层交换机。 综合实训实验图如图7.1所示。 图7.1 综合实训1实验图 为了实现网络资源的共享,需要PC机能够访问内部网络中的FTP服务器,以实现文件的上传和下载,并且PC机需要连接到打印服务器以进行远程的打印操作,PC机需要能够通过园区网络连接到internet的Web服务器,并能够进行Web网页的浏览。 其中各计算机的IP地址及网关配置说明 计算机IP 子网掩码网关 PC机192.168.100.100 255.255.255.0 192.168.100.1 FTP服务器192.168.200.10 255.255.255.0 192.168.200.1 打印服务器192.168.200.20 255.255.255.0 192.168.200.1 Web服务器100.1.1.2 255.255.255.0 100.1.1.1

神州数码数据中心搬迁解决方案

数据中心迁移解决方案 解决方案概述 随着许多企业的日益发展壮大,IT规模也相应急剧扩大。许多企业面临原有数据中心已不能满足企业发展需要的问题,从而衍伸出更换数据中心的需求。 机房搬迁涉及对企业的IT资源的转移、升级,包括了搬迁前的对现有系统的备份、搬迁过程的安全和搬迁后系统的完整恢复一系列过程,以及对系统搬迁的时间性都有比较严格的要求。 神州数码依托自身在数据中心整合方面丰富的实施经验和大量的专业IT人员储备,可以为用户提供系统化、高效化、安全化的数据中心迁移整体解决方案。具体实施包含风险评估、应用及物理设备迁移规划、应急方案制定等,并提供搬迁前后系统健康性检查、数据容灾管理等专业服务从而全面保障整个实施过程高效合理安全。 适用行业及需求 适用于金融、保险、制造业等拥有中大型IT数据中心的企业在新建数据中心、扩建或迁移数据中心、数据中心集中整合等业务需求客户收益 保障了客户所有设备及数据安全性 使得客户可以利用迁移完成自有IT资源淘汰、新购、升级等整

合进程 方案示意图

部分成功案例 海康人寿: 海康人寿应业务发展需要将其分布在上海市内3个子公司共4个机房进行构建统一数据中心并数据大集中,涉及设备167台,搬迁准备工作从2009年5月初开始,通过双方多次沟通准备,于2009年9月起对海康分布在上海市内3个分公司共4个机房所有设备分7次作整体搬迁。此项目神州数码不但承担用户机房搬迁,并且承担用户新机房规划及弱电工程建设,网络布线,应用梳理并派出专职项目经理代表用户方与其他相关供应商进行沟通以及监督整个搬迁过程中所有相关事宜的工程质量。本项目搬迁计划周密,项目管理规范、风险控制得当。是近年来多点多次综合搬迁项目成功的经典范例。 天地国际: 天地国际运输公司(TNT)作为全球领先的企业级快递公司,每年全球递送2.3亿票货件。随着亚太区业务规模的快速发展,原有

神州数码DCN DCWL-ZF-7900系列AP产品彩页V2.5

DCWL-ZF-7900系列 802.11a/b/g/n 智能无线AP DCWL-ZF-7942AP 室内智能无线AP DCWL-ZF-7962AP 室内智能无线AP DCWL-ZF-7962OT 室外大功率智能无线AP 产品概述 DCWL-ZF-7900系列智能无线接入点(AP, Access Point)是神州数码网络(以下简称DCN)为行业用户推出的新一代兼容802.11n 的高性能千兆无线接入点设备,可提供相当于传统802.11a/b/g 网络5倍以上的无线接入速率,能够覆盖更大的范围。DCWL-ZF-7900系列AP 上行接口采用千兆以太网接口接入,突破了百兆以太网接口的限制,使无线多媒体应用成为现实。 DCN DCWL-ZF-7900系列AP 支持Fat 和Fit 两种工作模式,根据网络规划的需要,可灵活地在Fat 和Fit 两种工作模式中切换。DCWL-ZF-7900系列产品作为瘦AP(Fit AP)时,需要与DCN 智能无线控制器 (DCWL-ZD 系列)产品配套使用;作为胖AP(Fat AP)时,可独立组网。DCWL-ZF-7900系列产品支持Fat/Fit 两种工作模式的特性,有利于将客户的WLAN 网络由小型网络平滑升级到大型网络,从而很好地保护了用户的投资。 DCN DCWL-ZF-7900系列AP 通过专有的BeamFlex TM 智能天线技术,“全向覆盖,定向动态增益”,自动适应各种环境的实时变化,保证始终如一的高性能,拓展了覆盖范围,并更好地提升了多媒体应用效果。这意味着同样的无线环境下,DCN 所需使用的AP 数目更少,用户满意度更高,系统性价比更佳。 DCN DCWL-ZF-7900系列产品共分为DCWL-ZF-7942AP ,DCWL-ZF-7962AP 和DCWL-ZF-7962OT 三个型号,分别支持单频2.4GHz 802.11n 和双频2.4GHz &5GHz 802.11n ,适用于室内和室外两种环境,是企业园区WLAN 接入、校园覆盖、运营商热点覆盖等应用环境的最理想的高速率无线接入点设备。 主要特性 ? 提供高速率宽带无线接入 DCN DCWL-ZF-7900系列AP 支持802.11abgn 标准。 ? 提供千兆以太网接口有线连接

神州数码交换机基本配置

神州数码交换机基本配置 恢复交换机的出厂设置 命令:set default 功能:恢复交换机的出厂设置。 命令模式:特权用户配置模式。 使用指南:恢复交换机的出厂设置,即用户对交换机做的所有配置都消失,用户重新启动交换机后,出现的提示与交换机首次上电一样。 注意:配置本命令后,必须执行write 命令,进行配置保留后重启交换机即可使交换机恢复到出厂设置。 举例: Switch#set default Are you sure? [Y/N] = y Switch#write Switch#reload 进入交换机的Setup配置模式 命令:setup 功能:进入交换机的Setup配置模式。 命令模式:特权用户配置模式。 使用指南:在Setup配置模式下用户可进行IP地址、Web 服务等的配置。 举例: Switch#setup Setup Configuration ---System Configuration Dialog--- At any point you may enter Ctrl+C to exit. Default settings are in square brackets [ ]. If you don't want to change the default settings, you can input enter. Continue with configuration dialog? [y/n]:y Please select language [0]:English [1]:中文 Selection(0|1) [0]:0 Configure menu [0]:Config hostname [1]:Config interface-Vlan1 [2]:Config telnet-server [3]:Config web-server [4]:Config SNMP [5]:Exit setup configuration without saving [6]:Exit setup configuration after saving

神州、交换机知识要点

神州数码交换机路由器知识要点 https://www.360docs.net/doc/109378003.html, 2010年02月27日来源:本站原创作者:佚名收藏本文交换机 1. 交换机恢复出厂设置及其基本配置. 1) //进入特权模式 2) del config.text 2. Telnet方式管理交换机. 1) //进入全局配置模式 2) enable password 0 [密码] 3) Line 0 4 4) Password 0 [密码] 5) Login 3. 交换机文件备份、升级、还原。 1) rgnos.bin系统文件 2) config.text配置文件 4. Enable密码丢失的解决办法 1) 重启 2) CTRL+C 3) 选择 4 (file) 4) 1 (移除) 5) Config.text 6) Laod (重启) 5. 交换机Vlan的划分

1)Vlan 10 2) In vlan 10 3) Ip add [IP] [子网掩码] 6. 交换机端口与Mac绑定和过滤 1) //进入串口 2) sw mode trunk sw port-security 3) sw port-security mac-address [MAC] mac-address-table static address [MAC] vlan-ID interface ethernet [要绑定的端口] 7. 生成树实验 1) spanning-tree 8. 交换机链路聚合 1) Int aggregateport 1 2) sw mode trunk 3) //进入串口 4) port-group 1 9. 交换机端口镜像 1) monitor session 1 source interface fastEthernet 0/2 both //被镜像的 2) monitor session 1 destination interface fastEthernet 0/3 //镜像端口 10. 多层交换机静态路由实验 1) ip route [存在的IP段] [子网掩码] [下一跳IP] 11. RIP动态路由 1) router rip

神州数码交换机配置

神州数码交换机配置基本命令 交换机基本状态: hostname> ;用户模式 hostname# ;特权模式 hostname(config)# ;全局配置模式 hostname(config-if)# ;接口状态 交换机口令设置: switch>enable ;进入特权模式 switch#config terminal ;进入全局配置模式 switch(config)#hostname ;设置交换机的主机名 switch(config)#enable secret xxx ;设置特权加密口令 switch(config)#enable password xxa ;设置特权非密口令 switch(config)#line console 0 ;进入控制台口 switch(config-line)#line vty 0 4 ;进入虚拟终端 switch(config-line)#login ;允许登录 switch(config-line)#password xx ;设置登录口令xx switch#exit ;返回命令 交换机VLAN设置: switch#vlan database ;进入VLAN设置 switch(vlan)#vlan 2 ;建VLAN 2 switch(vlan)#no vlan 2 ;删vlan 2 switch(config)#int f0/1 ;进入端口1 switch(config-if)#switchport access vlan 2 ;当前端口加入vlan 2 switch(config-if)#switchport mode trunk ;设置为干线 switch(config-if)#switchport trunk allowed vlan 1,2 ;设置允许的vlan switch(config-if)#switchport trunk encap dot1q ;设置vlan 中继switch(config)#vtp domain ;设置发vtp域名 switch(config)#vtp password ;设置发vtp密码 switch(config)#vtp mode server ;设置发vtp模式 switch(config)#vtp mode client ;设置发vtp模式 交换机设置IP地址: switch(config)#interface vlan 1 ;进入vlan 1 switch(config-if)#ip address ;设置IP地址 switch(config)#ip default-gateway ;设置默认网关 switch#dir Flash: ;查看闪存 交换机显示命令: switch#write ;保存配置信息 switch#show vtp ;查看vtp配置信息 switch#show run ;查看当前配置信息 switch#show vlan ;查看vlan配置信息 switch#show interface ;查看端口信息 switch#show int f0/0 ;查看指定端口信息 完了最最要的一步。要记得保存设置俄,

神州数码大型企业网络防火墙解决方案

神州数码大型企业网络防火墙解决方案 神州数码大型企业网络防火墙整体解决方案,在大型企业网络中心采用神州数码DCFW-1800E防火墙以满足网络中心对防火墙高性能、高流量、高安全性的需求,在各分支机构和分公司采用神州数码DCFW-1800S 防火墙在满足需要的基础上为用户节约投资成本。另一方面,神州数码DCFW-1800系列防火墙还内置了VPN 功能,可以实现利用Internet构建企业的虚拟专用网络。 返回页首 防火墙VPN解决方案 作为增值模块,神州数码DCFW-1800防火墙内置了VPN模块支持,既可以利用因特网(Internet)IPSEC 通道为用户提供具有保密性,安全性,低成本,配置简单等特性的虚拟专网服务,也可以为移动的用户提供一个安全访问公司内部资源的途径,通过对PPTP协议的支持,用户可以从外部虚拟拨号,从而进入公司内部网络。神州数码DCFW-1800系列防火墙的虚拟专网具备以下特性: - 标准的IPSEC,IKE与PPTP协议 - 支持Gateway-to-Gateway网关至网关模式虚拟专网 - 支持VPN的星形(star)连接方式

- VPN隧道的NAT穿越 - 支持IP与非IP协议通过VPN - 支持手工密钥,预共享密钥与X.509 V3数字证书,PKI体系支持 - IPSEC安全策略的灵活启用:管理员可以根据自己的实际需要,手工禁止和启用单条IPSEC安全策略,也为用户提供了更大的方便。 - 支持密钥生存周期可定制 - 支持完美前项保密 - 支持多种加密与认证算法: - 加密算法:DES, 3DES,AES,CAST,BLF,PAP,CHAP - 认证算法:SHA, MD5, Rmd160 - 支持Client-to-Gateway移动用户模式虚拟专网 - 支持PPTP定制:管理员可以根据自己的实际需要,手工禁止和启用PPTP。 返回页首

神州数码交换机telnet

Configuring Telnet Login

I. Network requirements
1. A PC logs in to the switch via Telnet and manages it.
2. The username and password are verified with local authentication and with Radius + local authentication, respectively.
3. Telnet login is allowed only from the IP address 192.168.1.2/24, or only from PCs in the 192.168.1.0/24 address range.

II. Network diagram
The PC and the switch must have a reachable route between them; the PC can ping the switch's management address.

III. Configuration steps
1. Log in to the switch and configure a local username and password
switch>enable
switch#config terminal
switch(config)#username test privilege 15 password 0 test
2. Configure the authentication method used for Telnet login (local, or Radius + local)
Local authentication:
switch(config)#authentication line vty login local
Radius + local authentication:
switch(config)#authentication line vty login radius local
Note: to enable Radius + local authentication, the Radius-Server parameters must also be configured according to the local environment; see the "security authentication configuration" section of this manual for details.
3. Restrict the addresses allowed to manage the switch via Telnet (a single IP or an address range)
Allow a single IP to Telnet to the switch:
switch(config)#authentication securityip 192.168.1.2
Allow an address range to Telnet to the switch:
switch(config)#access-list 1 permit 192.168.1.0 0.0.0.255
switch(config)#authentication ip access-class 1 in

IV. Key configuration points
1. A Layer 3 switch can have multiple Layer 3 virtual interfaces, and its management VLAN can be any VLAN with a Layer 3 interface.
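The walkthrough above assumes the switch already has a reachable management address. A minimal end-to-end sketch with local authentication might look like the following; placing the address on interface vlan 1 follows the basic command listing earlier in this document, and the concrete address and the single allowed PC are assumptions, not part of the original steps:

switch#config terminal
switch(config)#interface vlan 1                            ; management interface (assumed to be VLAN 1)
switch(config-if)#ip address 192.168.1.1 255.255.255.0     ; management address (assumed value)
switch(config-if)#exit
switch(config)#username test privilege 15 password 0 test  ; local account from step 1
switch(config)#authentication line vty login local         ; local authentication from step 2
switch(config)#authentication securityip 192.168.1.2       ; only this PC may Telnet in (step 3)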

DCN Wireless Controller AC6028 Wired Function Command Manual: VLAN and MAC Address Commands

Contents

Chapter 1: VLAN Configuration
1.1 VLAN configuration commands
1.1.1 debug gvrp
1.1.2 dot1q-tunnel enable
1.1.3 dot1q-tunnel selective enable
1.1.4 dot1q-tunnel selective s-vlan
1.1.5 dot1q-tunnel tpid
1.1.6 gvrp
1.1.7 garp timer hold
1.1.8 garp timer join
1.1.9 garp timer leave
1.1.10 garp timer leaveall
1.1.11 name
1.1.12 private-vlan
1.1.13 private-vlan association
1.1.14 show dot1q-tunnel
1.1.15 show garp
1.1.16 show gvrp
1.1.17 show vlan
1.1.18 show vlan-translation
1.1.19 switchport access vlan
1.1.20 switchport dot1q-tunnel
1.1.21 switchport forbidden vlan
1.1.22 switchport hybrid allowed vlan
1.1.23 switchport hybrid native vlan
1.1.24 switchport interface
1.1.25 switchport mode
1.1.26 switchport trunk allowed vlan
1.1.27 switchport trunk native vlan
1.1.28 vlan
1.1.29 vlan internal
1.1.30 vlan ingress enable
1.1.31 vlan-translation
1.1.32 vlan-translation enable
1.1.33 vlan-translation miss drop
1.2 Dynamic VLAN configuration commands
1.2.1 dynamic-vlan mac-vlan prefer
1.2.2 dynamic-vlan subnet-vlan prefer

DCN Switch VLAN Configuration Commands

VLAN Configuration Commands

8.2.2.1 vlan
Command: vlan [vlan-id]
         no vlan [vlan-id]
Function: creates a VLAN and enters VLAN configuration mode; in VLAN mode the user can set the VLAN name and assign switch ports to the VLAN. The no form of the command deletes the specified VLAN.
Parameters: [vlan-id] is the VID of the VLAN to create or delete, in the range 1 to 4094.
Command mode: global configuration mode.
Default: the switch has only vlan1 by default.
Usage guide: vlan1 is the switch's default VLAN; the user cannot configure or delete vlan1. Up to 4094 VLANs may be configured in total. Note also that this command cannot be used to delete dynamic VLANs learned through GVRP.
Example: create vlan100 and enter the vlan100 configuration mode.
switch (config)#vlan 100
switch (config-vlan100)#

8.2.2.2 name
Command: name [vlan-name]
         no name
Function: assigns a name to the VLAN; the VLAN name is a descriptive string for that VLAN. The no form of the command removes the VLAN name.
Parameters: [vlan-name] is the specified VLAN name string.
Command mode: VLAN configuration mode.
Default: the default VLAN name is vlanxxx, where xxx is the VID.
Usage guide: the switch can give different VLANs distinct names, which helps users remember VLANs and makes management easier.
Example: name vlan100 as testvlan.
switch (config-vlan100)#name testvlan

8.2.2.3 switchport access vlan
Command: switchport access vlan [vlan-id]
         no switchport access vlan
Function: adds the current access port to the specified VLAN; the no form removes the current port from the VLAN.
Parameters: [vlan-id] is the VID of the VLAN the current port joins, in the range 1 to 4094.
Command mode: interface configuration mode.
Default: all ports belong to vlan1 by default.
Usage guide: only ports in access mode can be added to a specified VLAN, and an access port can belong to only one VLAN at a time.
Example: add an access port to vlan100.
switch (config)#interface ethernet 0/0/8
switch (config-ethernet0/0/8)#switchport mode access
switch (config-ethernet0/0/8)#switchport access vlan 100
switch (config-ethernet0/0/8)#exit

8.2.2.4 switchport interface
Command: switchport interface [interface-list]
         no switchport interface [interface-list]
Function: assigns Ethernet ports to the VLAN; the no form removes one port or a group of ports from the specified VLAN.
Parameters: [interface-list] is the list of ports to add or remove; ";" and "-" are supported, for example ethernet 0/0/1;2;5 or ethernet 0/0/1-6;8.
Command mode: VLAN configuration mode.
Default: a newly created VLAN contains no ports by default.
Usage guide: access ports are ordinary ports; they can join a VLAN, but only one VLAN at a time.
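Putting the four commands above together, a minimal sketch might create a named VLAN and populate it both from VLAN mode and from interface mode; the VLAN ID, name, and port numbers here are placeholders chosen for illustration:

switch (config)#vlan 200                                        ; create VLAN 200 and enter VLAN mode
switch (config-vlan200)#name servers                            ; give the VLAN a descriptive name
switch (config-vlan200)#switchport interface ethernet 0/0/1-4   ; assign ports 1-4 from VLAN mode
switch (config-vlan200)#exit
switch (config)#interface ethernet 0/0/8                        ; configure one more port individually
switch (config-ethernet0/0/8)#switchport mode access            ; make sure it is an access port
switch (config-ethernet0/0/8)#switchport access vlan 200        ; add it to VLAN 200 from interface mode
switch (config-ethernet0/0/8)#exit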
