0219
Aperture: software for processing RAW files
Lightroom: PC software
Film uses a light-sensitive emulsion; bitumen can also be photosensitive
An image is formed by a reaction between light and chemicals
In a digital camera, the CCD receives the light and produces differences in voltage and current
24-bit RGB: 8 bits per channel, 256 steps from bright to dark, for over 16 million colors
Ordinary image files capture 24 bits; scanners can process higher bit depths
Photoshop can handle both 24-bit and 48-bit images
Very few machines in the world capture 48 bits directly
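The color counts above follow from simple arithmetic; a quick sketch (the function name is my own, not from any library):

```python
def total_colors(bits_per_channel: int, channels: int = 3) -> int:
    """Number of distinct colors at a given per-channel bit depth."""
    return (2 ** bits_per_channel) ** channels

print(total_colors(8))    # 24-bit RGB: 16,777,216 colors
print(total_colors(16))   # 48-bit RGB, as handled by high-end scanners
```

At 8 bits per channel each of R, G, B has 256 steps, so the total is 256 cubed, the "over 16 million colors" figure in the notes.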
Charge-coupled device (CCD): a light-sensing chip, a semiconductor that responds to light
A charge-coupled semiconductor
A charge-coupled device (CCD) is an analog shift register, enabling analog signals (electric charges) to be transported through successive stages (capacitors) controlled by a clock signal. Charge coupled devices can be used as a form of memory or for delaying analog, sampled signals. Today, they are most widely used for serializing parallel analog signals, namely in arrays of photoelectric light sensors. This use is so predominant that in common parlance, "CCD" is (erroneously) used as a synonym for a type of image sensor even though, strictly speaking, "CCD" refers solely to the way that the image signal is read out from the chip.
The capacitor perspective is reflective of the history of the development of the CCD and also is indicative of its general mode of operation, with respect to readout, but attempts aimed at optimization of present CCD designs and structures tend towards consideration of the photodiode as the fundamental collecting unit of the CCD. Under the control of an external circuit, each capacitor can transfer its electric charge to one or other of its neighbors. CCDs are used in digital photography and astronomy (particularly in photometry), sensors, medical fluoroscopy, optical and UV spectroscopy, and high-speed techniques such as lucky imaging.
Complementary metal–oxide–semiconductor (CMOS) (pronounced "see-moss", IPA: /siːmɔːs, ˈsiːmɒs/), is a major class of integrated circuits. CMOS technology is used in microprocessors, microcontrollers, static RAM, and other digital logic circuits. CMOS technology is also used for a wide variety of analog circuits such as image sensors, data converters, and highly integrated transceivers for many types of communication.
CMOS: complementary metal–oxide–semiconductor
CMOS is also sometimes referred to as complementary-symmetry metal–oxide–semiconductor. The words "complementary-symmetry" refer to the fact that the typical digital design style with CMOS uses complementary and symmetrical pairs of p-type and n-type metal oxide semiconductor field effect transistors (MOSFETs) for logic functions.
Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Significant power is only drawn when the transistors in the CMOS device are switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, for example transistor-transistor logic (TTL). CMOS also allows a high density of logic functions on a chip.
The phrase "metal–oxide–semiconductor" is a reference to the physical structure of certain field-effect transistors, having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of a semiconductor material. Instead of metal, current gate electrodes (including those up to the 65 nanometer technology node) are almost always made from a different material, polysilicon, but the terms MOS and CMOS nevertheless continue to be used for the modern descendants of the original process. Metal gates have made a comeback with the advent of high-k dielectric materials in the CMOS process, as announced by IBM and Intel for the 45 nanometer node and beyond.
Film is light-sensitive over its entire surface; film's photosensitive area exceeds the specs of CCD and CMOS sensors
"8 megapixels" is the effective spec; the actual count is more than 8 million, with the extra elements used for contrast and light-level reference
A CCD has a blackened ring around its edge that serves as a black reference; the higher a camera's pixel count, the smaller each photosite
Comparing ISO 400 and ISO 100 film: the 400 film has coarser grain, because each light-sensitive grain has a larger area
Light exposes the silver-halide emulsion, reducing it to black metallic silver and forming a latent image
A digital camera works the same way, except the sensing is not a reaction of light and chemistry but a photoelectric one
After a digital camera takes a shot it generates a voltage, which has a physical limit of about 1 volt
From 0 V to 1 V, the signal is quantized into 10-, 12-, 14-, or 16-bit steps (2^10, 2^12, 2^14, or 2^16 levels)
The voltage level produces the tonal scale of light; color is simulated, since the CCD itself cannot distinguish colors
| G | R |
| B | G |
ADC: analog-to-digital converter
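The 0 V to 1 V quantization described above can be modeled in a few lines; a minimal sketch, with illustrative (not camera-specific) values:

```python
def adc(voltage: float, bits: int = 12, v_ref: float = 1.0) -> int:
    """Quantize a voltage in 0..v_ref into one of 2**bits integer codes."""
    levels = 2 ** bits
    # Clamp to the physical 0..v_ref range, then scale to an integer code.
    v = min(max(voltage, 0.0), v_ref)
    return min(int(v / v_ref * levels), levels - 1)

print(adc(0.5))            # mid-scale on a 12-bit ADC: 2048
print(adc(1.0, bits=14))   # full scale on 14 bits: 16383
```

More bits give finer tonal steps over the same fixed voltage range, which is why 12-, 14-, and 16-bit sensors differ in tonal gradation, not in maximum signal.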
A RAW file is an image made of cell-by-cell color samples; every camera's RAW format is different
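Because each photosite records only one channel of the G/R/B/G mosaic shown above, the missing channels must be interpolated (demosaicing). A crude neighborhood-averaging sketch, far simpler than real camera algorithms:

```python
# Channel layout of the 2x2 Bayer tile from the table above (GRBG).
PATTERN = {(0, 0): "G", (0, 1): "R", (1, 0): "B", (1, 1): "G"}

def demosaic_pixel(mosaic, r, c):
    """Recover (R, G, B) at (r, c) by averaging the nearest samples of
    each color channel: a crude stand-in for real demosaicing."""
    h, w = len(mosaic), len(mosaic[0])
    samples = {"R": [], "G": [], "B": []}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                samples[PATTERN[(rr % 2, cc % 2)]].append(mosaic[rr][cc])
    return tuple(sum(v) / len(v)
                 for v in (samples["R"], samples["G"], samples["B"]))

# A flat gray scene: every photosite reads 100, so R = G = B = 100.
flat = [[100] * 4 for _ in range(4)]
print(demosaic_pixel(flat, 1, 1))  # (100.0, 100.0, 100.0)
```

Every 3x3 window of a GRBG mosaic contains all three channels, so each output pixel can always be reconstructed, at the cost of resolution in each individual channel.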
Digital images come out soft and must be sharpened; cameras include image-processing accelerator hardware
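The sharpening step mentioned above is typically a small convolution kernel; a minimal grayscale sketch (a common 3x3 sharpening kernel, not any specific camera's pipeline):

```python
# A 3x3 sharpening kernel: boost the center, subtract the four neighbors.
KERNEL = [[ 0, -1,  0],
          [-1,  5, -1],
          [ 0, -1,  0]]

def sharpen(img):
    """Convolve a grayscale image (list of lists) with KERNEL,
    clamping results to 0..255; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            acc = sum(KERNEL[i][j] * img[r - 1 + i][c - 1 + j]
                      for i in range(3) for j in range(3))
            out[r][c] = max(0, min(255, acc))
    return out

# A soft edge: the step from 50 to 150 gets exaggerated after sharpening.
soft = [[50, 50, 150, 150]] * 4
print(sharpen(soft)[1])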
Taiwan's Premier Image Technology (普力爾) is the world's largest contract manufacturer of digital cameras, producing over a hundred million camera modules a year
JPEG is also the name of an organization, the Joint Photographic Experts Group, which keeps updating the format's compatibility
Shoot 100 frames continuously on a digital camera and noise appears in the later frames, because the rising temperature changes the voltages
A film camera can likewise heat up, which accelerates development
RAW allows roughly ten times the adjustment latitude of work in a traditional darkroom
135 film: 24 x 36 mm
A standard lens has a focal length roughly equal to the diagonal of the film frame
Common lens focal lengths for 135 cameras include 58 mm, 50 mm, and 43 mm
Focal lengths for 120 cameras include 75 mm, 80 mm, and 90 mm
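The "standard lens equals the frame diagonal" rule above is easy to check; a quick sketch (the 56 x 56 mm figure assumes a 6x6 frame on 120 film):

```python
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    """Frame diagonal: the classic 'standard' focal length."""
    return math.hypot(width_mm, height_mm)

print(round(diagonal(24, 36), 1))   # 135 film: ~43.3 mm
print(round(diagonal(56, 56), 1))   # 6x6 on 120 film: ~79.2 mm
```

This is why 43 mm appears in the 135 list (it is the exact diagonal), while 50 mm and 58 mm are rounded-up conventions, and why 120-format standards cluster around 75 to 90 mm.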
Full size (full-frame): the sensor is the same size as the film frame
Street price
‧ $210
‧ £145
Body Material Plastic
Sensor
‧ 1/2.5" Type CCD
‧ 8.0 million effective pixels
Image sizes
‧ 3264 x 2448
‧ 2592 x 1944
‧ 2048 x 1536
‧ 1600 x 1200
‧ 640 x 480
‧ 3264 x 1832
Movie clips
‧ 640 x 480 @ 30fps
‧ 640 x 480 @ 30fps (Long play)
‧ 320 x 240 @ 30fps
‧ 160 x 120 @ 15fps (Compact mode)
File formats ‧ JPEG Exif 2.2
‧ DCF
‧ DPOF 1.1
‧ AVI Motion JPEG with WAVE monaural
Lens
‧ 5.8-34.8mm (35-210mm equiv)
‧ F2.8-4.8
‧ 6x optical zoom
Image stabilization Yes (lens shift-type)
Conversion lenses None
Digital zoom up to 4x
Focus TTL
AF area modes ‧ AiAF (Face Detection / 9-point)
‧ 1-point AF (fixed center/FlexiZone)
AF assist lamp Yes
Focus distance Closest 1cm
Metering ‧ Evaluative (linked to Face Detection AF frame)
‧ Center-weighted average
‧ Spot
ISO sensitivity
‧ Auto
‧ High ISO Auto
‧ ISO 80
‧ ISO 100
‧ ISO 200
‧ ISO 400
‧ ISO 800
‧ ISO 1600
Exposure compensation ‧ +/- 2EV
‧ in 1/3 stop increments
Shutter speed 15-1/2000 sec (With noise reduction for exposures over 1.3 seconds)
Aperture F2.8-4.8
Modes ‧ Auto
‧ Manual
‧ Digital Macro
‧ Color Accent
‧ Color Swap
‧ Stitch Assist
‧ Movie
‧ Special Scene
Scene modes
‧ Portrait
‧ Landscape
‧ Night Snapshot
‧ Kids & Pets
‧ Indoor
‧ Foliage
‧ Snow
‧ Beach
‧ Fireworks
‧ Aquarium
‧ Underwater
‧ Night scene
White balance
‧ Auto
‧ Daylight
‧ Cloudy
‧ Tungsten
‧ Fluorescent
‧ Fluorescent H
‧ Underwater
‧ Custom
Self timer ‧ 2 or 10secs
‧ Custom
Continuous shooting approx 1.3fps until card is full
Image parameters My Colors (My Colors Off, Vivid, Neutral, Sepia, B&W, Positive Film, Lighter Skin Tone, Darker Skin Tone, Vivid Blue, Vivid Green, Vivid Red, Custom Color)
Flash
‧ Auto
‧ Manual Flash on / off
‧ Slow sync
‧ 2nd-curtain
‧ Red-eye reduction
‧ Range: 30cm-3.5m (wide) / 55cm-2.0m (tele)
Viewfinder "Real-image" zoom viewfinder
LCD monitor ‧ 2.5-inch P-Si TFT
‧ 115,000 pixels
Connectivity ‧ USB 2.0 Hi-Speed
‧ AV out
Print compliance ‧ PictBridge
‧ Canon SELPHY Compact Photo Printers and PIXMA Printers supporting PictBridge (ID Photo Print, Movie Print supported on SELPHY CP printers only)
Storage ‧ SD / SDHC / MMC card compatible
‧ 32 MB card supplied
Power ‧ AA batteries
‧ Optional AC adapter kit ACK800
Other features
‧ Optional High Power Flash HF-DC1
‧ Conversion lens adapter LA-DC58G
‧ Wide-angle converter WC-DC58N (requires LA-DC58G)
‧ Tele-converter TC-DC58N (requires LA-DC58G)
‧ Close-up Lens 250D (requires LA-DC58G)
‧ Optional Waterproof Case (WP-DC16)
Weight (No batt) 200 g (7.05 oz)
Dimensions 97.3 x 67.0 x 41.9 mm (3.83 x 2.64 x 1.65 inch)
0226
sRGB: the range of colors that monitors on the market can display
The human eye can distinguish all colors / visible light / Lab color
sRGB lets an image reproduce its colors faithfully when passed around on the Internet or shown on televisions of the last decade
Publishing uses CMYK; at capture time a file only offers two color spaces, Adobe RGB and sRGB
A good monitor can display more colors, such as the Adobe RGB gamut
At output, the device usually only covers sRGB colors
When an exclamation mark appears while picking a color in Photoshop, that color is out of gamut and cannot be printed
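sRGB is defined by a standard nonlinear transfer function between linear light and the stored 0..255 values; a minimal sketch of the encode/decode pair:

```python
def srgb_encode(linear: float) -> float:
    """sRGB transfer function: linear light (0..1) -> encoded value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """Inverse: encoded sRGB value -> linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# 18% gray in linear light encodes near the middle of the 0..255 range.
print(round(srgb_encode(0.18) * 255))   # ~118
```

The nonlinearity is why middle gray in the real world does not sit at code value 128, and why color-managed software must decode to linear light before mixing or resampling colors.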
Pixel
A pixel (short for picture element, using the common abbreviation "pix" for "pictures") is a single point in a graphic image. Each such information element is not really a dot, nor a square, but an abstract sample. With care, pixels in an image can be reproduced at any size without the appearance of visible dots or squares; but in many contexts, they are reproduced as dots or squares and can be visibly distinct when not fine enough. The intensity of each pixel is variable; in color systems, each pixel has typically three or four dimensions of variability such as red, green, and blue, or cyan, magenta, yellow, and black.
Pixel: X = coordinate
What determines an image's resolution is the total number of pixels per unit size
An older 3-megapixel chip is the same size as today's 10-megapixel chip
Within the same area the pixels have become smaller and more numerous, moving from a 90 nm to a 65 nm process
At the same 5 megapixels, a larger CCD chip does not change the pixel count, but each photosite can be larger
The larger the chip area, the less resolving power the lens needs; the sensor is also more sensitive to light, which improves image quality
Less than half of a CCD's area is actually light-sensitive
Fujifilm uses octagonal CCD photosites that pack more densely, so at the same pixel count the resolution is better (Super CCD)
A pixel is generally thought of as the smallest single component of an image. The definition is highly context sensitive; for example, we can speak of printed pixels in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive, and depending on context there are several terms that are synonymous in particular contexts, e.g. pel, sample, byte, bit, dot, spot, etc. We can also speak of pixels in the abstract, or as a unit of measure, in particular when using pixels as a measure of resolution, e.g. 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart.
The measures dots per inch (dpi) and pixels per inch (ppi) are sometimes used interchangeably, but have distinct meanings especially in the printer field, where dpi is a measure of the printer's resolution of dot printing (e.g. ink droplet density). For example, a high-quality inkjet image may be printed with 200 ppi on a 720 dpi printer.
The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display), and therefore has a total number of 640 × 480 = 307,200 pixels or 0.3 megapixels.
The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image.
In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from halftone printing technology, and has been widely used to describe television scanning patterns.
Display pixel size
The size of a display pixel is determined by the screen resolution and diagonal size of the monitor displaying it. Some Examples:
* Screen Res: 1024x768, Diagonal Size: 19", Pixel size: 0.377mm
* Screen Res: 800x600, Diagonal Size: 17", Pixel size: 0.4318mm
* Screen Res: 640x480, Diagonal Size: 15", Pixel size: 0.4763mm
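The pixel sizes in the examples above follow from the resolution and the diagonal; a small sketch assuming square pixels (the function name is my own):

```python
import math

def pixel_pitch_mm(res_w: int, res_h: int, diagonal_in: float) -> float:
    """Physical pixel size in mm for a display with square pixels."""
    diag_px = math.hypot(res_w, res_h)      # diagonal length in pixels
    return diagonal_in * 25.4 / diag_px     # 25.4 mm per inch

print(round(pixel_pitch_mm(1024, 768, 19), 3))   # 0.377 mm
print(round(pixel_pitch_mm(800, 600, 17), 4))    # 0.4318 mm
```

Both results match the first two listed examples, confirming the rule: diagonal in millimeters divided by the diagonal pixel count.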
Native vs. logical pixels in LCDs
Modern computer monitors are expected to display a range of resolutions (this was not always so, even with CRTs). Displays capable of truly displaying only one resolution must first generate a native-resolution signal from any signal in a non-native resolution.
Modern computer LCDs are designed with a native resolution which refers to the perfect match between pixels and triads. CRT displays also use native red-green-blue phosphor triads, but these are not coincident with logical pixels.
The native resolution will produce the sharpest picture capable from the display. However, since the user can adjust the resolution, the monitor must be capable of displaying other resolutions. Non-native resolutions have to be supported by approximate resampling in the LCD controller, using interpolation algorithms (in CRTs, the physical system interpolates between the logical pixels and the physical phosphors). This often causes the screen to look somewhat jagged or blurry (especially with resolutions that are not even multiples of the native one). For example, a display with a native resolution of 1280×1024 will look best set at 1280×1024 resolution, will display 800×600 adequately by drawing each pixel with more physical triads, but will be unable to display in 1600×1200 sharply due to the lack of physical triads.
Pixels can be either rectangular or square. Pixels on computer monitors are usually square, but pixels used in some digital video formats have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats.
Each pixel in a monochrome image has its own value, a correlate of perceptual brightness or physical intensity. A numeric representation of zero usually represents black, and the maximum value possible represents white. For example, in an eight-bit image, the maximum unsigned value that can be stored by eight bits is 255, so this is the value used for white.
In a color image, each pixel can be described using its hue, saturation, and value (HSV), but is usually represented instead as the red, green, and blue intensities (in its RGB color space).
Bits per pixel
The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). The maximum number of colors a pixel can take can be found by taking two to the power of the color depth. For example, common values are
* 8 bpp, 2^8 = 256 colors
* 16 bpp, 2^16 = 65,536 colors; known as Highcolor or Thousands
* 24 bpp, 2^24 = 16,777,216 colors; known as Truecolor or Millions
* 48 bpp; for all practical purposes a continuous colorspace; used in many flatbed scanners and for professional work
Images composed of 256 colors or fewer are usually stored in the computer's video memory in packed pixel (chunky) format, or sometimes in planar format, where a pixel in memory is an index into a list of colors called a palette. These modes are therefore sometimes called indexed modes. While only 256 colors are displayed at once, those 256 colors are picked from a much larger palette, typically of 16 million colors. Changing the values in the palette permits a kind of animation effect. The animated startup logos of Windows 95 and Windows 98 are probably the best-known example of this kind of animation. On older systems, 4 bpp (16 colors) was common.
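The indexed-mode mechanism described above, including the palette-cycling animation trick, can be sketched in a few lines (a toy 2 bpp example):

```python
# A tiny indexed-color image: pixels store palette indices, not colors.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]  # 2 bpp
indexed = [[0, 1],
           [2, 3]]

# Rendering resolves each index through the palette...
rendered = [[palette[i] for i in row] for row in indexed]
print(rendered[0][1])   # (255, 0, 0)

# ...so editing one palette entry recolors every pixel that uses it,
# which is how palette-cycling animation effects work.
palette[1] = (0, 0, 255)
rendered = [[palette[i] for i in row] for row in indexed]
print(rendered[0][1])   # (0, 0, 255)
```

The image data never changes; only the 4-entry lookup table does, which is why such animations were cheap on old hardware.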
For depths larger than 8 bits, the number is the sum of the bits devoted to each of the three RGB (red, green and blue) components. A 16-bit depth is usually divided into five bits for each of red and blue, and six bits for green, as most human eyes are more sensitive to green than the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).
When an image file is displayed on a screen, the number of bits per pixel is expressed separately for the raster file and for the display. Some raster file formats have a greater bit-depth capability than others. The GIF format, for example, has a maximum depth of 8 bits, while TIFF files can handle 48-bit pixels. There are no consumer display adapters that can output 48 bits of color, so this depth is typically used for specialized professional applications with film scanners, printers and very expensive workstation computers. Such files are only rendered on screen with 24-bit depth on most computers.
Subpixels
[Image: Phosphor dots in a color CRT display bear no relation to pixels or subpixels]
[Image: Pixel geometry of various CRT and LCD displays]
Many display and image-acquisition systems are, for various reasons, not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance.
In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels. For example, LCDs typically divide each pixel horizontally into three subpixels.
Most digital camera image sensors also use single-color sensor regions, for example using the Bayer filter pattern, but in the case of cameras these are known as pixels, not subpixels.
The latter approach has been used to increase the apparent resolution of color displays. The technique, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately and produce a better displayed image.
While CRT displays also use red-green-blue masked phosphor areas, dictated by a mesh grid called the shadow mask, these can not be aligned with the displayed pixel raster, and therefore can not be utilised for subpixel rendering.
A megapixel is 1 million pixels, and is a term used not only for the number of pixels in an image, but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera with an array of 2048×1536 sensor elements is commonly said to have "3.1 megapixels" (2048 × 1536 = 3,145,728). The neologism sensel is sometimes used to describe the elements of a digital camera's sensor, since these are picture-detecting rather than picture-producing elements.[1]
Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement, so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they only record 1 channel (only red, or green, or blue) of the final color image. Thus, two of the three color channels for each sensor must be interpolated and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. Thus, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement).
In contrast to conventional image sensors, the Foveon X3 sensor uses three layers of sensor elements, so that it detects red, green, and blue intensity at each array location. This structure eliminates the need for de-mosaicing and eliminates the associated image artifacts, such as color blurring around sharp edges. Citing the precedent established by mosaic sensors, Foveon counts each single-color sensor element as a pixel, even though the native output file size has only one pixel per three camera pixels.[1] With this method of counting, an N-megapixel Foveon X3 sensor therefore captures the same amount of information as an N-megapixel Bayer-mosaic sensor, though it packs the information into fewer image pixels, without any interpolation.
On the sensor chip, a tiny lens (microlens) sits over each photosite to concentrate the light
Given the same light, a digital camera shows dark corners (vignetting) more readily than a film camera
The purple fringing in overexposed digital images is produced by the lens elements
A CMOS sensor's light-sensitive area is smaller than a CCD's, because the amplifier circuitry sits on the same chip
ADC: analog-to-digital converter
Sensor color depth