数字图像处理 外文翻译 外文文献 英文文献 数字图像处理与边缘检测


Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
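As a concrete, if trivial, illustration of these definitions, a digital image can be held in a program as a finite 2-D array of gray levels. The toy 4×4 array below is an assumption for illustration (Python with NumPy assumed), not an image from the text:

```python
import numpy as np

# A digital image is a finite 2-D grid of intensity samples f(x, y).
# Toy 4x4 grayscale "image" with 8-bit gray levels (0-255); each entry
# is one pixel, with a particular location (row, column) and value.
f = np.array([
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
], dtype=np.uint8)

print(f.shape)   # (4, 4): sixteen pixels in all
print(f[0, 2])   # gray level of the pixel at row 0, column 2
```

Indexing `f[x, y]` returns the intensity at that coordinate pair, exactly as the definition above describes.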

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense.” As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.
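The grouping of bands by energy per photon follows the Planck relation E = hc/λ. The short sketch below (the two example wavelengths are illustrative assumptions) shows why gamma rays sit at the high-energy end of the spectrum and radio waves at the low-energy end:

```python
# Energy per photon: E = h * c / wavelength (Planck relation).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength."""
    return h * c / wavelength_m

gamma = photon_energy(1e-12)   # ~1 pm, gamma-ray regime
radio = photon_energy(1.0)     # ~1 m, radio regime
print(gamma > radio)           # shorter wavelength -> more energy per photon
```

Since wavelength appears in the denominator, ordering bands by energy per photon simply reverses the ordering by wavelength, which is the arrangement the spectrum figure describes.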

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because “it looks better.” It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good” enhancement result.
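As a minimal sketch of the enhancement idea described above, the function below linearly stretches a low-contrast image onto the full 8-bit range. The helper name and the toy 2×2 image are assumptions for illustration, not code from the source:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly map the image's gray-level range onto [0, 255].

    Pixels spanning a narrow band such as [100, 140] are spread over
    the full 8-bit range, which makes obscured detail easier to see.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

dim = np.array([[100, 110], [130, 140]])   # low-contrast toy image
out = stretch_contrast(dim)
print(out)   # values now span the whole 0..255 range
```

Note that "better contrast" here is a subjective judgment, exactly as the text cautions; the stretch is only one of many possible enhancement choices.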

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
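The compression idea can be illustrated with a toy run-length encoder. This is not the JPEG algorithm, only a hedged sketch of how redundancy in uniform image regions reduces storage:

```python
def rle_encode(pixels):
    """Run-length encode a sequence of gray levels as [value, count] pairs.

    Runs of identical pixels collapse into short tokens, so storage
    drops for images with large uniform regions.
    """
    out = []
    for p in pixels:
        if out and out[-1][0] == p:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([p, 1])       # start a new run
    return out

row = [255, 255, 255, 255, 0, 0, 255, 255]
encoded = rle_encode(row)
print(encoded)   # [[255, 4], [0, 2], [255, 2]]
```

Eight pixel values collapse to three pairs here; real standards such as JPEG achieve far better ratios by transforming the image before entropy coding, but the underlying goal of removing redundancy is the same.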

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.

So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge detection

Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim to identify points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges. Instead they are normally affected by one or several of the following effects: 1. focal blur caused by a finite depth-of-field and finite point spread function; 2. penumbral blur caused by shadows created by light sources of non-zero radius; 3. shading at a smooth object edge; 4. local specularities or interreflections in the vicinity of object edges.

A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. A line therefore usually has one edge on each side.

To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.
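Since the figure with the one-dimensional signal is not reproduced here, the sketch below assumes example values with the same character: a large jump between the 4th and 5th pixels. A first-order difference exposes that edge as a single dominant spike:

```python
import numpy as np

# Assumed 1-D signal standing in for the missing figure: small
# fluctuations, then a large jump between the 4th and 5th samples.
signal = np.array([5, 7, 6, 4, 152, 148, 149, 150])

# Absolute first-order differences between neighbouring pixels.
diff = np.abs(np.diff(signal))
print(diff)                    # differences: 2, 1, 2, 148, 4, 1, 1
print(int(np.argmax(diff)))   # index 3: between the 4th and 5th pixels
```

The intuitive judgment "there is an edge between the 4th and 5th pixels" corresponds to the one difference that towers over the rest; the harder cases discussed next are those where the other differences are not so small.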

If the intensity difference between the 4th and 5th pixels were smaller, and the intensity differences between the other adjacent neighbouring pixels higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.

There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as will be described in the section on differential edge detection following below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction).
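A minimal sketch of the search-based family: edge strength computed as the Sobel gradient magnitude, together with the direction estimate that a subsequent directional-maximum search would use. The function and the toy test image are illustrative assumptions, not code from the source:

```python
import numpy as np

def sobel_gradient(img):
    """Estimate gradient magnitude and direction with 3x3 Sobel filters.

    Edge strength is the first-order gradient magnitude; the direction
    is what later stages (e.g. non-maximum suppression) would use.
    Border pixels are left at 0 for simplicity.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)    # gradient orientation in radians
    return magnitude, direction

# Vertical step edge: strength concentrates at the 0 -> 255 transition.
img = np.array([[0, 0, 255, 255]] * 4, dtype=float)
mag, _ = sobel_gradient(img)
print(mag[1])   # interior row: nonzero only next to the jump
```

In practice a Gaussian smoothing pass would precede this, as the text notes, and the raw magnitude would still need thinning and thresholding before it counts as an edge map.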

The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.

Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.

If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.
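The non-maximum suppression scheme just described can be sketched as follows: the gradient direction is rounded to a multiple of 45 degrees, and a pixel survives only if its magnitude is at least that of both neighbours along that direction. The function name and the toy ridge are assumptions for illustration:

```python
import numpy as np

def nonmax_suppress(mag, direction):
    """Thin thick edge responses by keeping only directional maxima."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    # Neighbour offsets for directions rounded to 0, 45, 90, 135 degrees.
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    angle = np.rad2deg(direction) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            d = int(round(float(angle[i, j]) / 45)) * 45 % 180
            di, dj = offsets[d]
            if (mag[i, j] >= mag[i + di, j + dj]
                    and mag[i, j] >= mag[i - di, j - dj]):
                out[i, j] = mag[i, j]
    return out

# A one-pixel-wide vertical ridge with the gradient pointing along +x:
ridge = np.array([[0, 5, 0]] * 3, dtype=float)
horiz = np.zeros((3, 3))          # direction 0 everywhere
thin = nonmax_suppress(ridge, horiz)
print(thin)   # only the ridge centre survives
```

The rounding to 45-degree multiples is exactly the discrete-grid compromise mentioned in the text: on a square lattice there are only four meaningful comparison directions.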

A commonly used approach to handle the problem of appropriate thresholds for thresholding is by using thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.
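The hysteresis procedure above can be sketched directly, assuming a precomputed edge-strength map: seeds above the upper threshold are traced through 8-connected neighbours that stay above the lower threshold, so faint continuations of strong edges are kept while isolated weak responses are not. The function and the toy strength map are illustrative assumptions:

```python
import numpy as np
from collections import deque

def hysteresis(mag, low, high):
    """Double-threshold edge tracking on an edge-strength map."""
    h, w = mag.shape
    edges = np.zeros((h, w), dtype=bool)
    edges[mag >= high] = True                     # strong seeds
    queue = deque(zip(*np.where(mag >= high)))
    while queue:                                  # trace from each seed
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < h and 0 <= nj < w and not edges[ni, nj]
                        and mag[ni, nj] >= low):
                    edges[ni, nj] = True          # weak but connected
                    queue.append((ni, nj))
    return edges

mag = np.array([[0, 0, 0, 0, 0],
                [9, 4, 3, 0, 0],     # strong seed with a faint tail
                [0, 0, 0, 0, 2]])    # isolated weak pixel
e = hysteresis(mag, low=2, high=8)
print(e.astype(int))
```

As the text notes, this encodes the assumption that edges form continuous curves; the open problem of choosing `low` and `high`, which may need to vary over the image, remains.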

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.
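In one dimension the zero-crossing idea can be checked directly: the second difference of a smoothed step changes sign where the gradient peaks. The signal values below are an assumption for illustration:

```python
import numpy as np

# A smoothed step: the intensity ramps from 0 up to 8.
signal = np.array([0.0, 0.0, 2.0, 6.0, 8.0, 8.0, 8.0])

first = np.diff(signal)          # gradient estimate: one local maximum
second = np.diff(signal, n=2)    # second-derivative estimate
print(second)

# A zero crossing is where consecutive second differences change sign;
# in the ideal case it brackets the gradient maximum.
crossings = np.where(np.sign(second[:-1]) * np.sign(second[1:]) < 0)[0]
print(crossings)
```

Here the single sign change in `second` sits exactly at the steepest part of the ramp, which is what Laplacian-based detectors exploit in two dimensions.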

We can conclude that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is “significant” or not is to use a threshold. Thus, we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold. A set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative. The definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient. Second-order derivatives are obtained using the Laplacian.

数字图像处理与边缘检测

数字图像处理

数字图像处理方法的研究源于两个主要应用领域:其一是为了便于人们分析而对图像信息进行改进:其二是为使机器自动理解而对图像数据进行存储、传输及显示。

一幅图像可定义为一个二维函数f(x,y),这里x和y是空间坐标,而在任何一对空间坐标(x,y)上的幅值f 称为该点图像的强度或灰度。当x,y和幅值f为有限的、离散的数值时,称该图像为数字图像。数字图像处理是指借用数字计算机处理数字图像,值得提及的是数字图像是由有限的元素组成的,每一个元素都有一个特定的位置和幅值,这些元素称为图像元素、画面元素或像素。像素是广泛用于表示数字图像元素的词汇。

视觉是人类最高级的感知器官,所以,毫无疑问图像在人类感知中扮演着最重要的角色。然而,人类感知只限于电磁波谱的视觉波段,成像机器则可覆盖几乎全部电磁波谱,从伽马射线到无线电波。它们可以对非人类习惯的那些图像源进行加工,这些图像源包括超声波、电子显微镜及计算机产生的图像。因此,数字图像处理涉及各种各样的应用领域。

关于图像处理到哪里为止、图像分析和计算机视觉等相关领域从哪里开始,学者们并没有一致的看法。有时用处理的输入和输出内容都是图像这一特点来界定图像处理的范围。我们认为这是一种有局限性的、多少有些人为的界定。例如,在这个定义下,甚至最普通的计算一幅图像灰度平均值的工作(其结果只是一个数)都不能算作图像处理。另一方面,有些领域(如计算机视觉)研究的最高目标是用计算机去模拟人类视觉,包括学习、推理并根据视觉输入采取行动等。这一领域本身是人工智能的分支,其目的是模仿人类智能。人工智能领域尚处在发展的初期阶段,其进展比预期的要慢得多。图像分析(也称为图像理解)领域则处在图像处理和计算机视觉两个学科之间。

从图像处理到计算机视觉这个连续的统一体内并没有明确的界线。然而,在这个连续的统一体中可以考虑三种典型的计算处理(即低级、中级和高级处理)来区分其中的各个学科。

低级处理涉及初级操作,如降低噪声的图像预处理、对比度增强和图像锐化。低级处理以输入、输出都是图像为特点。中级处理涉及分割(把图像分为不同区域或目标物)、对目标物的描述(将其缩减为更适合计算机处理的形式)以及对不同目标的分类(识别)。中级处理以输入一般是图像、输出是从这些图像中提取的属性(如边缘、轮廓及单个物体的标识等)为特点。最后,高级处理涉及“理解”已识别物体的集合(如在图像分析中那样),以及在连续统一体的最远端执行通常与视觉相关联的认知功能。

根据上述讨论,我们看到,图像处理和图像分析两个领域合乎逻辑的重叠区域是图像中特定区域或物体的识别。这样,在本文中,我们界定数字图像处理既包括输入和输出均是图像的处理,也包括从图像中提取特征直至识别特定物体的处理。举一个简单的文本自动分析的例子来具体说明这一概念:获取一幅包含文本的图像,对该图像进行预处理,提取(分割)单个字符,以适合计算机处理的形式描述这些字符,并识别这些字符,所有这些操作都在本文界定的数字图像处理的范围之内。而理解一页文本的内容,则可能要根据“理解”的复杂程度,归入图像分析甚至计算机视觉的领域。正如稍后将变得明显的那样,我们所定义的数字图像处理已成功地应用于众多具有特殊社会和经济价值的领域。

数字图像处理的应用领域多种多样,因此在阐述时需要某种形式的组织才能涵盖该领域的广度。阐述数字图像处理应用范围最简单的一种方法是根据信息源来分类(如可见光、X射线,等等)。在今天的应用中,最主要的图像能源是电磁能谱,其他主要的能源包括声波、超声波和电子(以用于电子显微镜方法的电子束形式)。建模和可视化应用中的合成图像由计算机产生。

建立在电磁波谱辐射基础上的图像是最熟悉的,特别是X射线和可见光谱图像。电磁波可以被想象为以各种波长传播的正弦波,也可以被认为是一种无质量的粒子流,每个粒子以波的模式以光速运动,并包含一定(一束)能量,每束能量称为一个光子。如果光谱波段根据每个光子的能量进行分组,我们会得到下图1所示的从伽马射线(最高能量)到无线电波(最低能量)的光谱。图中加底纹的条带表达了这样一个事实,即电磁波谱的各波段间并没有明确的界线,而是由一个波段平滑地过渡到另一个波段。

图像获取是第一步处理。注意,图像获取可能非常简单,例如直接得到一幅已经是数字形式的图像。通常,图像获取阶段还包括诸如缩放之类的预处理。

图像增强是数字图像处理中最简单和最有吸引力的领域之一。基本上,增强技术背后的思路是显现那些被模糊了的细节,或简单地突出一幅图像中感兴趣的特征。一个常见的增强例子是因为“看起来更好”而增强图像的对比度。应记住,增强是图像处理中非常主观的领域,这一点很重要。

图像复原也是改进图像外观的一个处理领域。然而,与主观的图像增强不同,图像复原是客观的,即复原技术倾向于以图像退化的数学或概率模型为基础。而增强则以什么构成“好的”增强效果这种人的主观偏爱为基础。

彩色图像处理已经成为一个重要领域,因为基于互联网的数字图像应用在显著增长。这部分涵盖了彩色模型和数字域基本彩色处理方面的许多基本概念。在后续章节中,彩色还是提取图像中感兴趣特征的基础。

小波是在各种分辨率下描述图像的基础。特别地,这些理论在本文中被用于图像数据压缩及金字塔表示方法,其中图像被逐次细分为较小的区域。

压缩,正如其名称所指的意思,所涉及的技术是减少保存图像所需的存储量,或者降低传输图像所需的带宽。虽然存储技术在过去的十年内有了很大改进,但传输能力却不能同日而语,尤其在互联网上更是如此,互联网的应用以大量的图片内容为特征。图像压缩以图像文件扩展名的形式为大多数计算机用户所熟悉(也许是不经意的),如用于JPEG(联合图片专家组)图像压缩标准的jpg文件扩展名。

形态学处理涉及提取图像成分的工具,这些成分在形状的表示和描述方面非常有用。这一章的内容开始了从输出图像的处理到输出图像属性的处理的转换。

分割过程将一幅图像划分为组成部分或目标物。通常,自主分割是数字图像处理中最为困难的任务之一。一个鲁棒的分割过程能大大推进那些要求单独识别出物体的成像问题的成功求解。另一方面,不健壮且不稳定的分割算法几乎总是会导致最终失败。通常,分割越准确,识别越容易成功。

表示和描述几乎总是跟随在分割步骤的输出后边,这一输出通常是未加工的像素数据,其构成要么是区域的边界(即区分一个图像区域和另一个区域的像素集),要么是区域本身的所有点。无论哪种情况,把数据转换成适合计算机处理的形式都是必要的。首先,必须确定数据应该被表示为边界还是整个区域。当注意的焦点是外部形状特性(如拐角和拐点)时,边界表示是合适的。当注意的焦点是内部特性(如纹理或骨架形状)时,区域表示是合适的。在某些应用中,这些表示方法是互补的。选择一种表示方式仅是把原始数据转换为适合计算机后续处理形式这一问题的一部分。为了描述数据以使感兴趣的特征更明显,还必须确定一种方法。描述也叫特征选择,涉及提取特征,该特征或是某些感兴趣的定量信息,或是区分一类目标与其他目标的基础。

识别是基于目标的描述给目标赋以标记(例如“车辆”)的过程。如前文所述,我们以单个目标识别方法的开发来结束对数字图像处理的讨论。

到目前为止,还没有谈到上面图2中关于先验知识以及知识库与处理模块之间交互的内容。关于问题域的知识以知识库的形式被编码装入图像处理系统。这一知识可能很简单,例如仅仅指出图像中感兴趣信息所在的区域,从而限制为寻找该信息而必须进行的搜索。知识库也可能相当复杂,如材料检测问题中所有主要可能缺陷的相关列表,或者与变化检测应用相关的、包含某区域高分辨率卫星图像的图像数据库。除了引导每一个处理模块的操作,知识库还要控制模块间的交互。在上面的图2中,这一区别用处理模块与知识库之间的双向箭头表示,而处理模块之间则用单向箭头连接。

边缘检测

边缘检测是图像处理和计算机视觉中的术语,尤其在特征检测和特征抽取领域,是一种用来识别数字图像亮度骤变点即不连续点的算法。尽管在任何关于分割的讨论中,点和线检测都是很重要的,但是边缘检测对于灰度级间断的检测是最为普遍的检测方法。

虽然某些文献讨论过理想阶跃边缘的检测,但自然图像中的边缘通常并不是理想的阶跃边缘。相反,它们通常受到下列一个或多个因素的影响:1. 有限景深和有限点扩散函数造成的聚焦模糊;2. 非零半径光源产生的阴影带来的半影模糊;3. 光滑物体边缘处的明暗渐变;4. 物体边缘附近的局部镜面反射或互反射。

一个典型的边界可能是(例如)一块红色和一块黄色之间的边界;与之相反的是边线,可能是在另外一种不变的背景上的少数不同颜色的点。在边线的每一边都有一个边缘。

在对数字图像的处理中,边缘检测是一项非常重要的工作。如果将边缘认为是一定数量点亮度发生变化的地方,那么边缘检测大体上就是计算这个亮度变化的导数。为简化起见,我们可以先在一维空间分析边缘检测。在这个例子中,我们的数据是一行不同点亮度的数据。例如,在下面的1维数据中我们可以直观地说在第4与第5个点之间有一个边界:

如果第4个和第5个点之间的光强度差别更小,而其他相邻像素点之间的光强度差更高,就不容易说该区域中应该有一个边缘了。

外文翻译---基于模糊逻辑技术图像上边缘检测

译文二: 1基于模糊逻辑技术图像上边缘检测[2] 摘要:模糊技术是经营者为了模拟在数学水平的代偿行为过程的决策或主观评价而引入的。下面介绍经营商已经完成了的计算机视觉应用。本文提出了一种基于模糊逻辑推理战略为基础的新方法,它被建议使用在没有确定阈值的数字图像边缘检测上。这种方法首先将用3?3的浮点二进制矩阵将图像分割成几个区域。边缘像素被映射到一个属性值与彼此不同的范围。该方法的鲁棒性所得到的不同拍摄图像将与线性Sobel运算所得到的图像相比较。并且该方法给出了直线的线条平滑度、平直度和弧形线条的良好弧度这些永久的效果。同时角位可以更清晰并且可以更容易的定义。关键词:模糊逻辑,边缘检测,图像处理,电脑视觉,机械的部位,测量 1.引言 在过去的几十年里,对计算机视觉系统的兴趣,研究和发展已经增长了不少。如今,它们出现在各个生活领域,从停车场,街道和商场各角落的监测系统到主要食品生产的分类和质量控制系统。因此,引进自动化的视觉检测和测量系统是有必要的,特别是二维机械对象[1,8]。部分原因是由于那些每天产生的数字图像大幅度的增加(比如,从X光片到卫星影像),并且对于这样图片的自动处理有增加的需求[9,10,11]。因此,现在的许多应用例如对医学图像进行计算机辅助诊断,将遥感图像分割和分类成土地类别(比如,对麦田,非法大麻种植园的鉴定,以及对作物生长的估计判断),光学字符识别,闭环控制,基于目录检索的多媒体应用,电影产业上的图像处理,汽车车牌的详细记录的鉴定,以及许多工业检测任务(比如,纺织品,钢材,平板玻璃等的缺陷检测)。历史上的许多数据已经被生成图像,以帮助人们分析(相比较于数字表之类的,图像显然容易理解多了)[12]。所以这鼓励了数字分析技术在数据处理方面的使用。此外,由于人类善于理解图像,基于图像的分析法在算法发展上提供了一些帮助(比如,它鼓励几何分析),并且也有助于非正式确认的结果。虽然计算机视觉可以被总结为一个自动(或半自动)分析图像的系统,一些变化也是可能的[9,13]。这些图像可以来自超出正常灰度和色彩的照片,例如红外光,X射线,以及新一代的高光谱 [1]Abdallah A. Alshennawy, A yman A. Aly. Edge Detection in Digital Images Using Fuzzy Logic Technique[]J. World Academy of Science, Engineering and Technology, 2009, 51:178-186.

英文文献翻译

中等分辨率制备分离的 快速色谱技术 W. Clark Still,* Michael K a h n , and Abhijit Mitra Departm(7nt o/ Chemistry, Columbia Uniuersity,1Veu York, Neu; York 10027 ReceiLied January 26, 1978 我们希望找到一种简单的吸附色谱技术用于有机化合物的常规净化。这种技术是适于传统的有机物大规模制备分离,该技术需使用长柱色谱法。尽管这种技术得到的效果非常好,但是其需要消耗大量的时间,并且由于频带拖尾经常出现低复原率。当分离的样本剂量大于1或者2g时,这些问题显得更加突出。近年来,几种制备系统已经进行了改进,能将分离时间减少到1-3h,并允许各成分的分辨率ΔR f≥(使用薄层色谱分析进行分析)。在这些方法中,在我们的实验室中,媒介压力色谱法1和短柱色谱法2是最成功的。最近,我们发现一种可以将分离速度大幅度提升的技术,可用于反应产物的常规提纯,我们将这种技术称为急骤色谱法。虽然这种技术的分辨率只是中等(ΔR f≥),而且构建这个系统花费非常低,并且能在10-15min内分离重量在的样本。4 急骤色谱法是以空气压力驱动的混合介质压力以及短柱色谱法为基础,专门针对快速分离,介质压力以及短柱色谱已经进行了优化。优化实验是在一组标准条件5下进行的,优化实验使用苯甲醇作为样本,放在一个20mm*5in.的硅胶柱60内,使用Tracor 970紫外检测器监测圆柱的输出。分辨率通过持续时间(r)和峰宽(w,w/2)的比率进行测定的(Figure 1),结果如图2-4所示,图2-4分别放映分辨率随着硅胶颗粒大小、洗脱液流速和样本大小的变化。

英文文献1 翻译

目录 1.理论............................................... - 2 - 2.实施............................................... - 3 - 3. 范例.............................................. - 4 - 4.变化和扩展......................................... - 6 - 4.1 利用梯度方向,以减少参数...................... - 6 - 4.2 Hough变换的内核............................... - 6 - 4.3Hough曲线变换与广义Hough变换.................. - 6 - 4.4 三维物体检测(平面和圆柱).................... - 6 - 4.5 基于加权特征.................................. - 7 - 4.6 选取的参数空间................................ - 7 - 4.6.1 算法实现一种高效椭圆检测................ - 8 - 5.局限性............................................. - 8 - 6. 参见.............................................. - 8 - 参考文献............................................. - 9 - 附件: ............................................... - 10 -

毕业论文外文翻译-图像分割

图像分割 前一章的资料使我们所研究的图像处理方法开始发生了转变。从输人输出均为图像的处理方法转变为输人为图像而输出为从这些图像中提取出来的属性的处理方法〔这方面在1.1节中定义过)。图像分割是这一方向的另一主要步骤。 分割将图像细分为构成它的子区域或对象。分割的程度取决于要解决的问题。就是说当感兴趣的对象已经被分离出来时就停止分割。例如,在电子元件的自动检测方面,我们关注的是分析产品的图像,检测是否存在特定的异常状态,比如,缺失的元件或断裂的连接线路。超过识别这此元件所需的分割是没有意义的。 异常图像的分割是图像处理中最困难的任务之一。精确的分割决定着计算分析过程的成败。因此,应该特别的关注分割的稳定性。在某些情况下,比如工业检测应用,至少有可能对环境进行适度控制的检测。有经验的图像处理系统设计师总是将相当大的注意力放在这类可能性上。在其他应用方面,比如自动目标采集,系统设计者无法对环境进行控制。所以,通常的方法是将注意力集中于传感器类型的选择上,这样可以增强获取所关注对象的能力,从而减少图像无关细节的影响。一个很好的例子就是,军方利用红外线图像发现有很强热信号的目标,比如移动中的装备和部队。 图像分割算法一般是基于亮度值的不连续性和相似性两个基本特性之一。第一类性质的应用途径是基于亮度的不连续变化分割图像,比如图像的边缘。第二类的主要应用途径是依据事先制定的准则将图像分割为相似的区域,门限处理、区域生长、区域分离和聚合都是这类方法的实例。 本章中,我们将对刚刚提到的两类特性各讨论一些方法。我们先从适合于检测灰度级的不连续性的方法展开,如点、线和边缘。特别是边缘检测近年来已经成为分割算法的主题。除了边缘检测本身,我们还会讨论一些连接边缘线段和把边缘“组装”为边界的方法。关于边缘检测的讨论将在介绍了各种门限处理技术之后进行。门限处理也是一种人们普遍关注的用于分割处理的基础性方法,特别是在速度因素占重要地位的应用中。关于门限处理的讨论将在几种面向区域的分割方法展开的讨论之后进行。之后,我们将讨论一种称为分水岭分割法的形态学

计算机网络-外文文献-外文翻译-英文文献-新技术的计算机网络

New technique of the computer network Abstract The 21 century is an ages of the information economy, being the computer network technique of representative techniques this ages, will be at very fast speed develop soon in continuously creatively, and will go deep into the people's work, life and study. Therefore, control this technique and then seem to be more to deliver the importance. Now I mainly introduce the new technique of a few networks in actuality live of application. keywords Internet Network System Digital Certificates Grid Storage 1. Foreword Internet turns 36, still a work in progress Thirty-six years after computer scientists at UCLA linked two bulky computers using a 15-foot gray cable, testing a new way for exchanging data over networks, what would ultimately become the Internet remains a work in progress. University researchers are experimenting with ways to increase its capacity and speed. Programmers are trying to imbue Web pages with intelligence. And work is underway to re-engineer the network to reduce Spam (junk mail) and security troubles. All the while threats loom: Critics warn that commercial, legal and political pressures could hinder the types of innovations that made the Internet what it is today. Stephen Crocker and Vinton Cerf were among the graduate students who joined UCLA professor Len Klein rock in an engineering lab on Sept. 2, 1969, as bits of meaningless test data flowed silently between the two computers. By January, three other "nodes" joined the fledgling network.

机器视觉技术发展现状文献综述

机器视觉技术发展现状 人类认识外界信息的80%来自于视觉,而机器视觉就是用机器代替人眼来做 测量和判断,机器视觉的最终目标就是使计算机像人一样,通过视觉观察和理解 世界,具有自主适应环境的能力。作为一个新兴学科,同时也是一个交叉学科,取“信息”的人工智能系统,其特点是可提高生产的柔性和自动化程度。目前机器视觉技术已经在很多工业制造领域得到了应用,并逐渐进入我们的日常生活。 机器视觉是通过对相关的理论和技术进行研究,从而建立由图像或多维数据中获机器视觉简介 机器视觉就是用机器代替人眼来做测量和判断。机器视觉主要利用计算机来模拟人的视觉功能,再现于人类视觉有关的某些智能行为,从客观事物的图像中提取信息进行处理,并加以理解,最终用于实际检测和控制。机器视觉是一项综合技术,其包括数字处理、机械工程技术、控制、光源照明技术、光学成像、传感器技术、模拟与数字视频技术、计算机软硬件技术和人机接口技术等,这些技术相互协调才能构成一个完整的工业机器视觉系统[1]。 机器视觉强调实用性,要能适应工业现场恶劣的环境,并要有合理的性价比、通用的通讯接口、较高的容错能力和安全性、较强的通用性和可移植性。其更强调的是实时性,要求高速度和高精度,且具有非接触性、实时性、自动化和智能 高等优点,有着广泛的应用前景[1]。 一个典型的工业机器人视觉应用系统包括光源、光学成像系统、图像捕捉系统、图像采集与数字化模块、智能图像处理与决策模块以及控制执行模块。通过 CCD或CMOS摄像机将被测目标转换为图像信号,然后通过A/D转换成数字信号传送给专用的图像处理系统,并根据像素分布、亮度和颜色等信息,将其转换成数字化信息。图像系统对这些信号进行各种运算来抽取目标的特征,如面积、 数量、位置和长度等,进而根据判别的结果来控制现场的设备动作[1]。 机器视觉一般都包括下面四个过程:

数字图像处理

数字图像处理(MATLAB版) 实验指导书 (试用版) 本实验指导书配合教材和课堂笔记中的例题使用 姚天曙编写 安徽农业大学工学院 2009年4月试行

目录 实验一、数字图像获取和格式转换 2 实验二、图像亮度变换和空间滤波 6 实验三、频域处理7 实验四、图像复原9 实验五、彩色图像处理10 实验六、图像压缩11 实验七、图像分割13 教材与参考文献14

《数字图像处理》实验指导书 实验一、数字图像获取和格式转换 一、实验目的 1掌握使用扫描仪、数码相机、数码摄像级机、电脑摄像头等数字化设备以及计算机获取数字图像的方法; 2修改图像的存储格式;并比较不同压缩格式图像的数据量的大小。 二、实验原理 数字图像获取设备的主要性能指标有x、y方向的分辨率、色彩分辨率(色彩位数)、扫描幅面和接口方式等。各类设备都标明了它的光学分辨率和最大分辨率。分辨率的单位是dpi,dpi是英文Dot Per Inch的缩写,意思是每英寸的像素点数。 扫描仪扫描图像的步骤是:首先将欲扫描的原稿正面朝下铺在扫描仪的玻璃板上,原稿可以是文字稿件或者图纸照片;然后启动扫描仪驱动程序后,安装在扫描仪内部的可移动光源开始扫描原稿。为了均匀照亮稿件,扫描仪光源为长条形,并沿y方向扫过整个原稿;照射到原稿上的光线经反射后穿过一个很窄的缝隙,形成沿x方向的光带,又经过一组反光镜,由光学透镜聚焦并进入分光镜,经过棱镜和红绿蓝三色滤色镜得到的RGB三条彩色光带分别照到各自的CCD上,CCD将RGB光带转变为模拟电子信号,此信号又被A/D变换器转变为数字电子信号。至此,反映原稿图像的光信号转变为计算机能够接受的二进制数字电子信号,最后通过串行或者并行等接口送至计算机。扫描仪每扫一行就得到原稿x方向一行的图像信息,随着沿y方向的移动,在计算机内部逐步形成原稿的全图。扫描仪工作原理见图1.1。

数字信号处理英文文献及翻译

数字信号处理 一、导论 数字信号处理(DSP)是由一系列的数字或符号来表示这些信号的处理的过程的。数字信号处理与模拟信号处理属于信号处理领域。DSP包括子域的音频和语音信号处理,雷达和声纳信号处理,传感器阵列处理,谱估计,统计信号处理,数字图像处理,通信信号处理,生物医学信号处理,地震数据处理等。 由于DSP的目标通常是对连续的真实世界的模拟信号进行测量或滤波,第一步通常是通过使用一个模拟到数字的转换器将信号从模拟信号转化到数字信号。通常,所需的输出信号却是一个模拟输出信号,因此这就需要一个数字到模拟的转换器。即使这个过程比模拟处理更复杂的和而且具有离散值,由于数字信号处理的错误检测和校正不易受噪声影响,它的稳定性使得它优于许多模拟信号处理的应用(虽然不是全部)。 DSP算法一直是运行在标准的计算机,被称为数字信号处理器(DSP)的专用处理器或在专用硬件如特殊应用集成电路(ASIC)。目前有用于数字信号处理的附加技术包括更强大的通用微处理器,现场可编程门阵列(FPGA),数字信号控制器(大多为工业应用,如电机控制)和流处理器和其他相关技术。 在数字信号处理过程中,工程师通常研究数字信号的以下领域:时间域(一维信号),空间域(多维信号),频率域,域和小波域的自相关。他们选择在哪个领域过程中的一个信号,做一个明智的猜测(或通过尝试不同的可能性)作为该域的最佳代表的信号的本质特征。从测量装置对样品序列产生一个时间或空间域表示,而离散傅立叶变换产生的频谱的频率域信息。自相关的定义是互相关的信号本身在不同时间间隔的时间或空间的相关情况。 二、信号采样 随着计算机的应用越来越多地使用,数字信号处理的需要也增加了。为了在计算机上使用一个模拟信号的计算机,它上面必须使用模拟到数字的转换器(ADC)使其数字化。采样通常分两阶段进行,离散化和量化。在离散化阶段,信号的空间被划分成等价类和量化是通过一组有限的具有代表性的信号值来代替信号近似值。 奈奎斯特-香农采样定理指出,如果样本的取样频率大于两倍的信号的最高频率,一个信号可以准确地重建它的样本。在实践中,采样频率往往大大超过所需的带宽的两倍。 数字模拟转换器(DAC)用于将数字信号转化到模拟信号。数字计算机的使用是数字控制系统中的一个关键因素。 三、时间域和空间域 在时间或空间域中最常见的处理方法是对输入信号进行一种称为滤波的操作。滤波通常包括对一些周边样本的输入或输出信号电流采样进行一些改造。现在有各种不同的方法来表征的滤波器,例如: 一个线性滤波器的输入样本的线性变换;其他的过滤器都是“非线性”。线性滤波器满足叠加条件,即如果一个输入不同的信号的加权线性组合,输出的是一个同样加权线性组合所对应的输出信号。

外文翻译---图像的边缘检测

附:英文资料翻译 图像的边缘检测 To image edge examination algorithm research academic report Abstract Digital image processing took a relative quite young discipline, is following the computer technology rapid development, day by day obtains the widespread application.The edge took the image one kind of basic characteristic, in the pattern recognition, the image division, the image intensification as well as the image compression and so on in the domain has a more widespread application.Image edge detection method many and varied, in which based on brightness algorithm, is studies the time to be most long, the theory develops the maturest method, it mainly is through some difference operator, calculates its gradient based on image brightness the change, thus examines the edge, mainly has Robert, Laplacian, Sobel, Canny, operators and so on LOG. First as a whole introduced digital image processing and the edge detection survey, has enumerated several kind of at present commonly used edge detection technology and the algorithm, and selects two kinds to use Visual the C language programming realization, through withdraws the image result to two algorithms the comparison, the research discusses their good and bad points. 对图像边缘检测算法的研究学术报告摘要 数字图像处理作为一门相对比较年轻的学科, 伴随着计算机技术的飞速发展, 日益得到广泛的应用. 边缘作为图像的一种基本特征, 在图像识别,图像分割,图像增强以及图像压缩等的领域中有较为广泛的应用.图像边缘提取的手段多种多样,其中基于亮度的算法,是研究时间最久,理论发展最成熟的方法, 它主要是通过一些差分算子, 由图像的亮度计算其梯度的变化, 从而检测出边缘, 主要有Robert, Laplacian, Sobel, Canny, LOG 等算子. 首先从总体上介绍了数字图像处理及边缘提取的概况, 列举了几种目前常用的边缘提取技术和算法,并选取其中两种使用Visual C++语言编程实现,通过对两种算法所提取图像结果的比较,研究探讨它们的优缺点. First chapter introduction §1.1 image edge examination introduction The image edge is one of image most basic characteristics, often is carrying image majority of informations.But the edge exists in the image irregular structure and in

Substation --- Foreign Literature Translation --- A Comprehensive Overview of Substations

English Translation: A Comprehensive Overview of Substations

With economic development and the rapid rise of modern industry, the design of power supply systems has become increasingly complete and systematic. Because factories' demand for electricity is increasing quickly, the requirements on the reliability, quality, and quantity of the power supply have also risen sharply, so higher and more exacting demands are placed on power supply design. Whether a design is reasonable not only directly affects the capital investment, the operating costs, and the losses of non-ferrous metal, but is also reflected in the reliability and safety of the power supply. In a word, it is closely tied to economic performance and to people's safety.

The substation is an important part of the electric power system. It consists of electrical equipment and of transmission and distribution facilities. It obtains electric power from the power system and, through its functions of transformation, allocation, transport, and protection, delivers that power safely, reliably, and economically to every destination. As an important part of power transport and control, the transformer substation must change its traditional mode of design and control in order to adapt to the modern electric power system, the development of modern industry, and the trends of social life.

The electric power industry is one of the foundations of national industry and of national economic development. It is a secondary-energy industry that converts coal, oil, natural gas, hydropower, nuclear power, wind power, and other energy sources into electrical energy. It provides adequate power for the fast and stable development of the other departments of the national economy, and its level of development is an important indicator of a country's level of economic development.

Because electricity is so important to industry and the national economy, the transmission and distribution of electric energy are indispensable, and power transmission and distribution are therefore critical. The substation is an important link in power transmission: it adjusts the power from superior power plants before it is delivered to lower-level loads. Its operation and capacity directly affect the size of the load that can be supplied downstream, and thereby affect industrial production and power consumption. If one link of a substation system fails, the protection for that part of the system will act, which may result in power outages and other serious disadvantages to production and daily life. Therefore, for the electric power system the substation is essential to protecting the reliability of the electricity supply,

Literature Review on Image Processing

Literature Review

1.1 Theoretical Background

Edge detection in digital images is an important foundation for image analysis tasks such as image segmentation, target region recognition, and region shape extraction; the first step of image processing and analysis is often edge detection. The edge of an object appears as a discontinuity in the local features of the image, that is, the part of the image where local brightness changes most significantly: abrupt changes in gray level, in color, in texture, and so on. At the same time, object edges are the boundaries between different regions. An image edge has two characteristics, direction and magnitude: gray levels usually change gently along the edge, while pixel gray levels change sharply perpendicular to it. According to the characteristics of the gray-level change, image edges can be classified as step, roof, or ridge edges.

1.2 Purpose and Significance of Research on Image Edge Detection

Digital image edge detection is an emerging discipline that has developed alongside the computer. With the great advances in computer hardware and software, it has found wide application in every area of life. Edge detection is among the most basic techniques in fields such as image analysis and computer vision. How to extract image edge information quickly and accurately has long been a research hotspot at home and abroad, yet edge detection remains a difficult problem in image processing.

To study edge detection, one must first study image denoising and image sharpening. The former removes external interference to obtain a more faithful image; the latter provides images with more pronounced features for edge detection, that is, it enhances the image features. Although both play important roles in edge detection, this study focuses on edge detection itself; the final goals are faster processing and more accurate recognition of image features. Early classical algorithms include edge-operator methods, surface fitting, template matching, and thresholding.

Julesz mentioned edge detection as early as 1959, and Roberts began the earliest systematic research in 1965; since then, theories and methods of edge detection have emerged continuously and kept improving. Edge detection began with empirical methods, such as convolving the image with differential operators (e.g., the gradient) or feature templates. However, these methods generally suffer from obvious defects, so their detection results are not
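The review above notes that gray levels change sharply perpendicular to an edge and gently along it, and credits Roberts (1965) with the earliest systematic work. A minimal sketch of the Roberts cross operator, with invented gray values, shows the two edge characteristics (magnitude and direction) the review mentions:

```python
import math

def roberts_cross(img, r, c):
    """Roberts cross operator at (r, c): diagonal differences
    gx = I(r, c) - I(r+1, c+1), gy = I(r+1, c) - I(r, c+1).
    Returns (gradient magnitude, direction in radians)."""
    gx = img[r][c] - img[r + 1][c + 1]
    gy = img[r + 1][c] - img[r][c + 1]
    return math.hypot(gx, gy), math.atan2(gy, gx)

# A step edge: gray level jumps from 10 to 90 between columns 1 and 2.
img = [
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 90, 90],
]
mag_edge, _ = roberts_cross(img, 1, 1)   # window straddles the jump
mag_flat, _ = roberts_cross(img, 1, 2)   # window inside the bright region
```

Across the step edge the magnitude is large, while in the flat region it is zero, matching the review's description of gray-level behavior at and away from edges.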

English Literature on Image Acquisition (Frame-Grabber) Cards

English literature (4000+ words): Similarity-Based Visualization of Image Collections

G.P. Nguyen and M. Worring

Intelligent Sensory Information Systems, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands

E-mail: {giangnp, worring}@science.uva.nl

Abstract: In the literature, few content-based multimedia retrieval systems use visualization as a tool for exploring collections. Yet when searching for images without an example to start from, the dataset needs to be explored. To date, most available systems merely display a random subset of the collection in the form of a two-dimensional grid of images. Recently, advanced similarity-based techniques have been developed for browsing. However, they do not analyze the problems that arise when visualizing large visual collections. In this paper we make these problems explicit. We first establish three general requirements: overview, visibility, and preservation of the data structure. Solutions are then proposed for each requirement. Finally, a system is presented and experimental results are given to substantiate our theory and methods.

1 Introduction

With the development of multimedia technology and the availability of inexpensive digital cameras, image and video collections have grown enormously in size. To manage, explore, and search through such collections, visualization systems are indispensable, and many works have contributed to this interesting field [18]. The main problem in content-based retrieval is the semantic gap between the system's automatic annotation of the collection and the conceptual access conditions required by the user. System performance can be improved from the system side, from the user side, or from a combination of the two. In any approach, visualization of the collection is an essential element, since it is the best way to establish the connection between the user and the system. In the literature, few content-based multimedia retrieval systems use visualization as a tool for exploring the collection. Yet when searching for images without an example to start from, the dataset needs to be explored. To date, most available systems merely display a random subset of the collection as a two-dimensional grid of images, and browsing depends on the relations between images; it should therefore be based on similarity. For describing, querying, and searching by basic features or examples, visual browsing is the most appropriate approach. Recently, more advanced similarity-based browsing techniques have been developed. However, they do not analyze the particular problems that arise when visualizing large visual collections. For example, since the space required grows with the size of the image set, randomly selecting a set of images from the collection cannot be considered a correct approach: with this option the user only gets a feeling of what might be inside the database. On the other hand, display limitations (in both size and resolution) do not allow any system to show the whole collection; moreover, showing all the images at once gives the user no additional information, and images easily get lost in an overcrowded display. Some systems relieve this limitation by presenting the whole collection to the user as a point set: each image is represented by a point on the display, and once users select a point, they get a visualization of the actual image. From a practical point of view, however, this approach is not easy, since the user is looking at more than a thousand points. Furthermore, each image is a visual object, so at least some of its content should be visible to the user. In this paper, all of these problems are made explicit. The paper is structured as follows. In Section 2, we analyze the requirements for visualizing large image collections. In Section 3, solutions are derived for each requirement. Finally, Section 4 shows experimental results on real data.

2 Problem Analysis

In this section we analyze in more detail the problems that arise when visualizing large visual collections, from which the general requirements for a common visualization system are defined. The first problem in visualizing a large collection is the limited display size and resolution of the device used to show it, the so-called visual space; the size of the collection is usually much larger than the visual space can accommodate. Second, since images are visual objects, the ultimate aim of any visualization tool is to show the content of the images. Because of the space limitation, only a small portion of the images can be displayed at the same time. Randomly selecting these images is certainly not a good approach, since it cannot show the distribution of the whole collection
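The paper above argues that a random subset cannot give an overview of a large collection. One simple way to make that point concrete (an illustrative technique, not the authors' method; the toy feature vectors are invented) is farthest-point sampling over image feature vectors, which greedily picks representatives spread across the collection:

```python
def farthest_point_sample(features, k):
    """Greedy farthest-point sampling: choose k items spread over the
    collection, as a simple alternative to showing a random subset."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    chosen = [0]  # start from an arbitrary item
    while len(chosen) < k:
        # next representative = item farthest from everything chosen so far
        best = max((i for i in range(len(features)) if i not in chosen),
                   key=lambda i: min(dist2(features[i], features[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen

# Toy 2-D "feature vectors" forming two tight clusters (illustrative data).
feats = [(0, 0), (0.1, 0), (0, 0.1), (10, 10), (10.1, 10), (10, 10.1)]
reps = farthest_point_sample(feats, 2)
```

With two representatives requested, one is picked from each cluster, so even a tiny display budget conveys the overall structure of the collection, which is the "overview" requirement the paper establishes.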

Scientific and Technical English Translation, Unit 1 to Unit 7

Unit 1 Electronics: Analog and Digital

1. As with series resonance, the greater the resistance in the circuit, the lower the Q and, accordingly, the flatter and broader the resonance curve of either line current or circuit impedance.

2. A wire carrying a current looks exactly the same and weighs exactly the same as it does when it is not carrying a current.

3. Click the mouse on the waveform and drag it to change the pulse repetition rate, or directly enter a new value of the period in the provided dialogue box, while keeping the pulse width unchanged.

4. Electronics is the science and the technology of the passage of charged particles in a gas, in a vacuum, or in a semiconductor. Please note that particle motion confined within a metal only is not considered electronics.

5. Hardware technologies have played vital roles in our ability to use electronic properties to process information, but software and data processing aspects have not developed at the same speed.

6. However, in a properly designed DC amplifier the effect of transistor parameter variation, other than Ico, may be practically eliminated if the operating point of each stage is adjusted so that it remains in the linear operating range of the transistor as temperature varies.

Museum --- Foreign Literature Translation

The first passage: Museum for Aviation and Aviation Exhibition Park

Bartłomiej Kisielewski

The idea of flying, the spirit of place, the structure of the historic airfield – the new Museum of Aviation in Krakow takes up these references intellectually and synthesizes them into a building. The old hangars of the former airport Rakowice-Czyzyny set the modular scale for the footprint and the height of the new building. Developed from this basic shape, as if cut out and folded like a paper airplane, a large structure has been generated, with triangular wings made of concrete and yet as light as a wind-vane propeller. The wings are generously glazed and open in all directions. Their form and arrangement depend on the interior uses. In the floor plans of the wings, the three offset planes create a spatial continuity of constantly changing interior and exterior views and connect the visual focal points inside the building with the outdoor exhibition areas.

The design of the new aviation exhibition park links the museum's eight buildings with the outdoor exhibition areas and establishes a connection with the historical experience. The former visual axes and passages are respected, the old paths have been completed, and the spaces facing the airfield and the runway are clearly defined. Each building presents one theme or one period of the history of flight. Large platforms extend around the buildings, providing space for outdoor exhibitions on special themes. The museum holds more than 150 aircraft, engines, flight replicas, complete technical archives, and historical photographs. Its special collection comprises flying machines from the origins of aviation, such as the Jatho 1903, the Grade 1909, a model of the Wright brothers' 1909 aircraft, and the 1911 Taube monoplane.

Literature Review: Research on Image Preprocessing Techniques Based on MATLAB

Graduation Project Literature Review

Topic: Research on image preprocessing techniques based on MATLAB

Major: Electronic Information Engineering

1 Preface

As is well known, MATLAB is widely used in numerical computation, data processing, automatic control, image and signal processing, neural networks, optimization, fuzzy logic, wavelet analysis, and many other fields. In particular, MATLAB's image processing and analysis toolbox supports indexed, RGB, grayscale, and binary images, and can operate on files in many image formats such as *.bmp, *.jpg, and *.tif. Flexible use of the image processing and analysis functions and toolboxes that MATLAB provides greatly simplifies the concrete programming work and fully demonstrates MATLAB's advantages in image processing and analysis.

An image is an entity obtained by observing the objective world with various observation systems, one that can act directly or indirectly on the human eye to produce vision. Vision is the principal means by which human beings acquire information from nature: according to statistics, visual information accounts for about 60% of the information humans acquire, auditory information for about 20%, and all other channels together for only about 20%. Visual information is thus extremely important to human beings. Images, in turn, are the main avenue through which humans acquire visual information, the most important, richest, and largest source of information we can experience. Objective things are usually three-dimensional (3D) in space, but the images obtained from objective scenes belong to two-dimensional (2D) planes.

Images exist in many forms: visible or invisible, abstract or concrete, suitable or unsuitable for computer processing.

Image processing refers to the process of converting an image signal into a digital signal and processing it with a computer. Image processing first appeared in the 1950s, when electronic computers had developed to a certain level and people began to use them to process graphic and image information. Image processing took shape as a discipline in the early 1960s. The purpose of early image processing was to improve the quality of images; it took humans as its object and the improvement of human visual effects as its goal. In image processing, the input is a low-quality image and the output is an image of improved quality; common methods include image enhancement, restoration, coding, and compression. The first practical success was achieved by the Jet Propulsion Laboratory (JPL) in the United States. In 1964 they applied image processing techniques such as geometric correction, gray-level transformation, and noise removal to the thousands of lunar photographs sent back by the probe Ranger 7, taking into account the position of the sun and the lunar environment, and the computer successfully produced a map of the lunar surface, a huge success. They then carried out more complex image processing on nearly a hundred thousand photographs sent back by subsequent probes, obtaining topographic maps, color maps, and panoramic mosaics of the moon. These remarkable achievements laid a solid foundation for the human moon landing and also promoted
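Among the preprocessing steps mentioned above (geometric correction, gray-level transformation, noise removal), noise removal is easy to sketch. The review itself concerns MATLAB and its toolbox, but purely as an illustration the following Python sketch mimics a 3×3 median filter (comparable in behavior to MATLAB's medfilt2) on a made-up image containing one impulse-noise pixel:

```python
def median_filter3(img):
    """3x3 median filter, a standard noise-removal preprocessing step.

    Interior pixels are replaced by the median of their 3x3 neighborhood;
    border pixels are simply copied.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(img[r + i][c + j]
                            for i in (-1, 0, 1) for j in (-1, 0, 1))
            out[r][c] = window[4]  # median of the 9 neighborhood values
    return out

# A flat gray image with a single impulse ("salt") noise pixel.
noisy = [[50] * 5 for _ in range(5)]
noisy[2][2] = 255
clean = median_filter3(noisy)
```

The lone 255-valued outlier is replaced by the neighborhood median of 50, while the rest of the image is untouched; this robustness to impulse noise is why the median filter is preferred over simple averaging for salt-and-pepper noise.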
