Computer Graphics & Multimedia Question List

Define geometric transformation

Geometric Transformation:
A geometric transformation is any bijection of a set having some geometric structure to itself or to another such set. More specifically, a geometric transformation is a function whose domain and range are sets of points; most often, both the domain and the range are R² or both are R³.

Example:
Within transformation geometry, the properties of an isosceles triangle can be deduced from the fact that the triangle is mapped to itself by a reflection about a certain line. This contrasts with the classical proofs by the criteria for congruence of triangles.[1]
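To make the definition concrete, here is a minimal Python sketch (an added example, not part of the original answer) of a geometric transformation as a function on points of R²: a reflection about the y-axis, which maps the vertex set of a symmetric isosceles triangle to itself.

import numpy as np

# Reflection about the y-axis as a 2x2 matrix acting on points of R^2.
REFLECT_Y_AXIS = np.array([[-1, 0],
                           [ 0, 1]])

# An isosceles triangle symmetric about the y-axis: the reflection maps
# its vertex set to itself (apex fixed, base vertices swapped).
triangle = np.array([[ 0, 2],   # apex
                     [-1, 0],   # base left
                     [ 1, 0]])  # base right

reflected = triangle @ REFLECT_Y_AXIS.T
print(reflected)  # same three vertices, with the base vertices exchanged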

Define Coordinate transformation

Co-ordinate Transformation:
Co-ordinate transformations are not intuitive in 2-D, and positively painful in 3-D. They are best tackled in the following order: (i) vectors in 2-D, (ii) tensors in 2-D, (iii) vectors in 3-D, (iv) tensors in 3-D, and finally (v) 4th-rank tensor transforms. A major aspect of coordinate transforms is the evaluation of the transformation matrix, especially in 3-D. It is very important to recognize that all coordinate transforms discussed here are rotations of the coordinate system while the object itself stays fixed. The "object" can be a vector such as force or velocity, or a tensor such as stress or strain in a component. Object rotations are a separate topic.
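As an illustration of the 2-D vector case, here is a small Python sketch (function name and example values are my own, not from the text): the components of a fixed vector re-expressed in a coordinate system rotated by an angle theta.

import numpy as np

def rotate_axes_2d(v, theta):
    # The object (vector) stays fixed; only the axes rotate by theta,
    # so this matrix is the inverse of an object rotation by theta.
    c, s = np.cos(theta), np.sin(theta)
    q = np.array([[ c, s],
                  [-s, c]])  # axis-rotation (coordinate transform) matrix
    return q @ np.asarray(v, dtype=float)

# A unit force along the old x-axis, expressed in axes rotated by 30 degrees:
print(rotate_axes_2d([1.0, 0.0], np.radians(30)))  # [0.866..., -0.5]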

What do you mean by digital image?

Digital Image :
A digital image is a numeric representation of a picture, composed of a finite set of digital values called pixels. Digital imaging or digital image acquisition is the creation of such images, for example of a physical scene or of the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images.

Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and outputted as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).
 

What is data compression?

Data compression :
In digital signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless.
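As a minimal illustration of the lossless case (an added sketch, not part of the original answer), run-length encoding stores each run of repeated symbols as a (symbol, count) pair and recovers the input exactly:

from itertools import groupby

def rle_encode(s):
    # Store each run of repeated symbols as a (symbol, count) pair.
    return [(ch, len(list(run))) for ch, run in groupby(s)]

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

data = "AAAABBBCCD"
packed = rle_encode(data)          # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(packed) == data  # lossless: the original is fully recovered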

Why data compression is needed?

Data compression is needed :
Data compression is needed because it allows data to be stored and transmitted without taking up an unnecessary amount of space. Compression algorithms reduce the amount of real space the data would normally occupy, and how much space the compressed data takes up depends on how it has been compressed. The algorithm is the determining factor in how well the data will compress, how much space it will take up, and how much of the data will be available after compression: an algorithm formatted specifically to the data can shrink it to nearly no real space, while a generalized algorithm may not compress the data nearly as well.

Write down the Huffman coding algorithm used in lossless compression

The Huffman coding algorithm used in lossless compression (a runnable sketch follows the steps):
1. Read a BMP image using an image box control in the Delphi language. The TImage control can be used to display a graphical image - Icon (ICO), Bitmap (BMP), Metafile (WMF), GIF, JPEG, etc. This control reads an image and converts it into a text file.

2. Call a function that sorts or prioritizes characters based on the frequency count of each character in the file.

3. Call a function that creates an initial heap, then reheaps the tree according to the occurrence of each node: the lower the occurrence, the earlier the node is attached in the heap. Create a new node whose left child is the lowest in the sorted list and whose right child is the second lowest.

4. Build the Huffman code tree based on the prioritized list. Remove those two elements from the sorted list, as they are now part of one node, and add their probabilities; the result is the probability of the new node.

5. Perform insertion sort on the list with the new node.

6. Repeat steps 3-5 until only one node is left.

7. Perform a traversal of the tree to generate the code table, which determines the code for each element of the tree in the following way:
The code for each symbol is obtained by tracing a path to the symbol from the root of the tree. A 1 is assigned for a branch in one direction and a 0 for a branch in the other direction. For example, a symbol reached by branching right twice, then left once, may be represented by the pattern '110'. The figure below depicts codes for nodes of a sample tree.
        *
       / \
     (0) (1)
         / \
      (10) (11)
            / \
       (110) (111)

8. Once a Huffman tree is built, canonical Huffman codes, which require less information to rebuild, may be generated by the following steps:

Step 1. Remember the lengths of the codes resulting from a Huffman tree generated per above.

Step 2. Sort the symbols to be encoded by the lengths of their codes (use symbol value to break ties).

Step 3. Initialize the current code to all zeros and assign code values to symbols from longest to shortest code as follows:

(A). If the current code length is greater than the length of the code for the current symbol, right shift off the extra bits.

(B). Assign the code to the current symbol.

(C). Increment the code value.

(D). Get the symbol with the next longest code.

(E). Repeat from A until all symbols are assigned codes.

9. Encoding Data- Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code.

10. The original image is reconstructed i.e. decompression is done by using Huffman Decoding.

11. Generate a tree equivalent to the encoding tree. If you know the Huffman code for some encoded data, decoding may be accomplished by reading the encoded data one bit at a time. Once the bits read match the code for a symbol, write out the symbol and start collecting bits again.

12. Read the input bit by bit, branching left or right down the tree, until a leaf is reached.

13. Output the character encoded in the leaf, return to the root, and continue with step 12 until the symbols for all codes are known.
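The steps above describe a Delphi-based implementation; as an added illustration, here is a minimal Python sketch of the core of steps 2-13 (frequency count, tree building via a heap, code generation by traversal, encoding, and bit-by-bit decoding). The names and the heapq-based priority queue are my own choices, not from the original.

import heapq
from collections import Counter

def build_tree(data):
    # Steps 2-6: prioritize symbols by frequency, repeatedly merge the two
    # lowest-frequency nodes, and add their counts for the new node.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so tuples never compare tree nodes
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (a, b)))
        tie += 1
    return heap[0][2]

def make_codes(node, prefix="", codes=None):
    # Step 7: traverse the tree, assigning 0 to one branch and 1 to the other.
    codes = {} if codes is None else codes
    if isinstance(node, tuple):
        make_codes(node[0], prefix + "0", codes)
        make_codes(node[1], prefix + "1", codes)
    else:
        codes[node] = prefix or "0"
    return codes

def decode(bits, root):
    # Steps 11-13: read one bit at a time, branch down the tree,
    # emit the symbol at each leaf, then return to the root.
    out, node = [], root
    for b in bits:
        node = node[0] if b == "0" else node[1]
        if not isinstance(node, tuple):
            out.append(node)
            node = root
    return "".join(out)

data = "abracadabra"
root = build_tree(data)
codes = make_codes(root)
encoded = "".join(codes[c] for c in data)  # step 9: replace symbols with codes
assert decode(encoded, root) == data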

 

What is 2D Discrete Cosine Transformation(DCT)?

2D Discrete Cosine Transformation(DCT) :
The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance (with respect to the image's visual quality). The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain.
 The basic operation of the DCT is as follows:

1. The input image is N by M;

2. f(i,j) is the intensity of the pixel in row i and column j;

3. F(u,v) is the DCT coefficient in row u and column v of the DCT matrix.

4. For most images, much of the signal energy lies at low frequencies; these appear in the upper left corner of the DCT.

5. Compression is achieved since the lower right values represent higher frequencies, and are often small - small enough to be neglected with little visible distortion.

6. The DCT input is an 8 by 8 array of integers. This array contains each pixel's gray-scale level;

7. 8-bit pixels have levels from 0 to 255.
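To make the operation concrete, here is a naive NumPy sketch (an added example, not from the original answer) of the orthonormal 2-D DCT-II of an N by M block, computing F(u,v) directly from f(i,j):

import numpy as np

def dct2(block):
    # Naive 2-D DCT-II: F(u, v) = c(u) c(v) * sum over i, j of
    # f(i, j) * cos((2i+1)u*pi/2N) * cos((2j+1)v*pi/2M).
    n, m = block.shape
    i, j = np.arange(n), np.arange(m)
    out = np.empty((n, m))
    for u in range(n):
        for v in range(m):
            cu = np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n)
            cv = np.sqrt(1 / m) if v == 0 else np.sqrt(2 / m)
            basis = np.outer(np.cos((2 * i + 1) * u * np.pi / (2 * n)),
                             np.cos((2 * j + 1) * v * np.pi / (2 * m)))
            out[u, v] = cu * cv * np.sum(block * basis)
    return out

# An 8x8 block of 8-bit gray levels (0-255); most signal energy lands in the
# low-frequency coefficients in the upper-left corner of the result.
block = np.random.randint(0, 256, (8, 8)).astype(float)
coeffs = dct2(block - 128)  # level shift, as JPEG does
print(np.round(coeffs, 1))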
 

Compare MPEG-I with MPEG-II?

Compare MPEG-I with MPEG-II :
1. MPEG-2 succeeded MPEG-1 to address some of the older standard's weaknesses.

2. MPEG-2 offers better quality than MPEG-1.

3. MPEG-1 is used for VCD while MPEG-2 is used for DVD.

4. One may consider MPEG-2 as MPEG-1 that supports higher resolutions and is capable of using higher and variable bitrates.

5. MPEG-1 is older than MPEG-2, but the former is arguably better at lower bitrates.

6. MPEG-2 has a more complex encoding algorithm.
 

Describe different types of broadcast video standards

Describe different types of broadcast video standards :
The first colour TV broadcast system was implemented in the United States in 1953. This was based on the NTSC - National Television System Committee standard. NTSC is used by many countries on the American continent as well as many Asian countries including Japan.
NTSC runs on 525 lines/frame.

The PAL - Phase Alternating Line - standard was introduced in the early 1960s and implemented in most European countries except France.
The PAL standard utilises a wider channel bandwidth than NTSC, which allows for better picture quality.
PAL runs on 625 lines/frame.

The SECAM - Séquentiel Couleur à Mémoire, or Sequential Colour with Memory - standard was introduced in the early 1960s and implemented in France. SECAM uses the same bandwidth as PAL but transmits the colour information sequentially.

SECAM runs on 625 lines/frame.
 

Explain in short the HDTV and DVI standards

The HDTV and DVI standards :
HDTV : 

Short for High-Definition Television, a type of television that provides much better resolution than televisions based on the NTSC standard. HDTV is a digital TV broadcasting format in which the broadcast transmits widescreen pictures with more detail and quality than found in standard analog television or other digital television formats. HDTV is a type of Digital Television (DTV) broadcast and is considered the best-quality DTV format available. Types of HDTV displays include direct-view, plasma, rear-screen projection, and front-screen projection. HDTV requires an HDTV tuner to view, and the most detailed HDTV format is 1080i.

 HDTV Minimum Performance Attributes:

Receiver:  Receives ATSC terrestrial digital transmissions and decodes all ATSC Table 3 video formats.

Display Scanning Format: Has active vertical scanning lines of 720 progressive (720p), 1080 interlaced (1080i), or higher.

Aspect Ratio: Capable of displaying a 16:9 image.

Audio: Receives, reproduces, and/or outputs Dolby Digital audio.

DVI : 

Digital Video Interactive:

Digital Video Interactive (DVI) was the first multimedia desktop video standard for IBM-compatible personal computers. It enabled full-screen, full-motion video and graphics to be presented on a DOS-based desktop computer. The scope of Digital Video Interactive encompasses a file format.

Implementations:

The first implementation of DVI, developed in the mid-80s, relied on three 16-bit ISA cards installed inside the computer: one for audio processing, another for video.
Later DVI implementations used only one card, such as Intel's ActionMedia series.
Compression:

The DVI format specified two video compression schemes - Presentation Level Video or Production Level Video (PLV) and Real-Time Video (RTV) - and two audio compression schemes, ADPCM and PCM8.[3][1]

The original video compression scheme, called Presentation Level Video (PLV), was asymmetric in that a Digital VAX-11/750 minicomputer was used to compress the video in non-real time to 30 frames per second.
 

Describe the YIQ and CMYK color mode

Describe the YIQ and CMYK color mode :
YIQ:
(Figure: the YIQ color space at Y = 0.5, with the I and Q chroma coordinates scaled up to 1.0; an image shown along with its Y, I, and Q components.)

YIQ is the color space used by the NTSC color TV system, employed mainly in North and Central America, and Japan. I stands for in-phase, while Q stands for quadrature, referring to the components used in quadrature amplitude modulation. Some forms of NTSC now use the YUV color space, which is also used by other systems such as PAL.

The Y component represents the luma information, and is the only component used by black-and-white television receivers. I and Q represent the chrominance information. In YUV, the U and V components can be thought of as X and Y coordinates within the color space. I and Q can be thought of as a second pair of axes on the same graph, rotated 33°; therefore IQ and UV represent different coordinate systems on the same plane.

The YIQ system is intended to take advantage of human color-response characteristics. The eye is more sensitive to changes in the orange-blue (I) range than in the purple-green range (Q) — therefore less bandwidth is required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to 0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which keeps the bandwidth of the overall signal down to 4.2 MHz. In YUV systems, since U and V both contain information in the orange-blue range, both components must be given the same amount of bandwidth as I to achieve similar color fidelity.
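As an added sketch (the coefficients are the commonly quoted approximate NTSC values, and the function name is my own), converting RGB in [0, 1] to YIQ is a single matrix multiply:

import numpy as np

# Commonly quoted NTSC RGB -> YIQ matrix (approximate).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luma
    [0.596, -0.274, -0.322],   # I: orange-blue axis
    [0.211, -0.523,  0.312],   # Q: purple-green axis
])

def rgb_to_yiq(rgb):
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

print(rgb_to_yiq([1.0, 0.5, 0.0]))  # an orange: large positive I, small Q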

  CMYK Color Model:
"CMYK" redirects here. For the extended play by James Blake, see CMYK (EP).

"CMYB" redirects here. For the cMyb gene, see MYB (gene).

Color printing typically uses ink of four colors: cyan, magenta, yellow, and key (black).

When CMY “primaries” are combined at full strength, the resulting “secondary” mixtures are red, green, and blue. Mixing all three gives black.

The CMYK color model (process color, four color) is a subtractive color model, used in color printing, and is also used to describe the printing process itself. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow, and key (black).

The "K" in CMYK stands for key because in four-color printing, cyan, magenta, and yellow printing plates are carefully keyed, or aligned, with the key of the black key plate.

The "black" generated by mixing commercially practical cyan, magenta and yellow inks is unsatisfactory, so four-color printing uses black ink in addition to the subtractive primaries. Common reasons for using black ink include:[6]

 

Write short notes on AVI and DVI

Write short notes on AVI and DVI :
Audio Video Interleave:
Audio Video Interleaved (also Audio Video Interleave), or AVI, is a file format whose files can contain both audio and video data in a container that allows synchronous audio-with-video playback. AVI files support multiple streaming audio and video tracks.

Format:
AVI is a derivative of the Resource Interchange File Format (RIFF), which divides a file's data into blocks, or "chunks." Each "chunk" is identified by a FourCC tag. An AVI file takes the form of a single chunk in a RIFF-formatted file.
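As an added illustration of the chunk layout (the file name is hypothetical), the outer RIFF header of an AVI file can be read in a few lines of Python:

import struct

def read_riff_header(path):
    # RIFF layout: 4-byte FourCC 'RIFF', 4-byte little-endian payload size,
    # then a 4-byte form type ('AVI ' for AVI files).
    with open(path, "rb") as f:
        fourcc, size, form = struct.unpack("<4sI4s", f.read(12))
    return fourcc.decode(), size, form.decode()

# print(read_riff_header("clip.avi"))  # e.g. ('RIFF', <payload bytes>, 'AVI ')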

Limitations:
Since its introduction in the early 90s, new computer video techniques have been introduced which the original AVI specification did not anticipate:

AVI does not provide a standardized way to encode aspect ratio information, and there are several competing approaches to including a time code in AVI files, although the format is widely used.

AVI was not intended to contain video using any compression technique that requires access to future frame data beyond the current frame, although approaches exist to support modern video compression techniques (such as MPEG-4) within it.

AVI cannot reliably contain some specific types of variable bitrate (VBR) data, and the container overhead of AVI files at the resolutions and frame rates normally used to encode standard-definition feature films is about 5 MB per hour of video.


Digital Video Interactive:
Digital Video Interactive (DVI) was the first multimedia desktop video standard for IBM-compatible personal computers. It enabled full-screen, full-motion video and graphics to be presented on a DOS-based desktop computer. The scope of Digital Video Interactive encompasses a file format.

Implementations:
The first implementation of DVI, developed in the mid-80s, relied on three 16-bit ISA cards installed inside the computer: one for audio processing, another for video.

Later DVI implementations used only one card, such as Intel's ActionMedia series.

Compression:

The DVI format specified two video compression schemes - Presentation Level Video or Production Level Video (PLV) and Real-Time Video (RTV) - and two audio compression schemes, ADPCM and PCM8.[3][1]

The original video compression scheme, called Presentation Level Video (PLV), was asymmetric in that a Digital VAX-11/750 minicomputer was used to compress the video in non-real time to 30 frames per second.

 

Briefly explain the standards of TV and Video Broadcasting

Briefly explain the standards of TV and Video Broadcasting :
Broadcast television systems:
Broadcast television systems are encoding or formatting standards for the transmission and reception of terrestrial television signals.

Frames:
Ignoring color, all television systems work in essentially the same manner: the monochrome image seen by a camera is divided into horizontal scan lines.

Viewing technology:
Analog television signal standards are designed to be displayed on a cathode ray tube (CRT).
Overscan:
Television images are unique in that they must incorporate regions of the picture with reasonable-quality content that will never be seen by some viewers.
 
Interlacing:
In a purely analog system, field order is merely a matter of convention.

Image polarity:
Another parameter of analog television systems, minor by comparison, is the choice of whether vision modulation is positive or negative.

Audio:
In analog television, the analog audio portion of a broadcast is invariably modulated separately from the video.

Evolution:
In a few countries, most notably the United Kingdom, television broadcasting on VHF has been entirely shut down.

Digital Video Broadcasting:
The Digital Video Broadcasting Project is an industry-led consortium of over 200 broadcasters, manufacturers, network operators, software developers and regulators from around the world committed to designing open technical standards for the delivery of digital television.

 

What are the merits and demerits of DVST?

Advantages and disadvantages of DVST:
DVST: DVST stands for Direct View Storage Tube. It is a display device in which an electron flood gun and a writing gun are present. The flood gun floods electrons onto a wire grid on which the writing gun has already written an image. The electrons from the flood gun are repelled by the parts of the wire grid that have been negatively charged by the writing electron beam. The parts of the wire grid that have not been charged negative allow the electrons to pass through; these electrons strike the screen and produce the image.

Advantages and disadvantages of DVST:

Advantages:

1. Refreshing is not essential.
2. Very complex pictures can be displayed at very high resolution without flicker.
 
Disadvantages:

1. They normally do not display colour.
2. Selected parts of a picture cannot be erased individually.
3. Redrawing and erasing a complex picture can take quite a few seconds.

 

What is the difference between vector and raster graphics?

The difference between vector and raster graphics :
Raster Graphics:

1. Raster graphics are composed of pixels.

2. A raster graphic, such as a gif or jpeg, is an array of pixels of various colors, which together form an image.

3. When enlarged, raster graphics become "blocky," since each pixel increases in size as the image is made larger.

4. Adobe Photoshop, GIMP, Krita, Corel Photopaint and Pixelmator are primarily raster.

5. Most digital painting programs and apps like ArtRage, Sketchbook, Layerpaint and Procreate are raster.

6. JPG, GIF, PNG, TIFF, and BMP are all common raster image formats, as are PSDs (Photoshop documents). PDFs can contain both.

7. Raster Graphics are comprised of tiny squares of color information, which are usually referred to as pixels, or dots.

8. Raster Graphics are usually measured in Dots Per Inch (dpi) when creating images or graphics for print, or Pixels Per Inch (ppi) when creating images or graphics for web use, which allows you to measure how much detailed color information a specific image contains.

9. For example, if you have a 2 inch x 2 inch image at a resolution of 300 ppi, your image is 600 x 600 pixels - 360,000 pixels in total - which provide the detail, color, and shading information for your image (see the sketch after this list).

10. Raster Graphic Examples: Stationery Printing, Catalogues, Flyers, Postcards, etc.
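Here is the pixel arithmetic from point 9 as a tiny Python sketch (an added example; the function name is my own):

def raster_pixel_count(width_in, height_in, ppi):
    # Total pixels in a raster image of a given print size and resolution.
    w_px, h_px = int(width_in * ppi), int(height_in * ppi)
    return w_px, h_px, w_px * h_px

# A 2 in x 2 in image at 300 ppi is 600 x 600 pixels = 360,000 pixels.
print(raster_pixel_count(2, 2, 300))  # (600, 600, 360000)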

Vector Graphics:

1. Vector graphics are composed of paths.

2. A vector graphic, such as an .eps file or Adobe Illustrator (.ai) file, is composed of paths, or lines, that are either straight or curved.

3. The data file for a vector image contains the points where the paths start and end, how much the paths curve, and the colors that either border or fill the paths. Because vector graphics are not made of pixels, the images can be scaled to be very large without losing quality.

4. Adobe Illustrator, Inkscape, Sketch, Affinity Designer and Corel Draw are primarily vector.

5. Most CAD and 3D rendering programs like AutoCAD, Maya, Blender and Cinema4D work with (more complex) vectors.

6. EPS, SVG and AI (Illustrator) are the most common vector formats. They can all contain embedded raster images. PDFs can contain both.

7. Vector Application Examples: Large-format signage, vehicle wraps, window graphics, vinyl lettering, etc.

8. While Raster Graphics are comprised of individual pixels, Vector Graphics are built using mathematically defined areas to produce shapes, lines, and curves. This is why Vector Graphics are suited for graphic elements that are more geometric in nature, such as shapes and text, whereas Raster Graphics are suited for more detailed images such as photographs.
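A minimal sketch of why vector art scales cleanly (an added example): the stored paths are just coordinates, so enlarging the image only multiplies numbers instead of enlarging pixels.

def scale_path(points, factor):
    # Scale a list of (x, y) path points about the origin.
    return [(x * factor, y * factor) for x, y in points]

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(scale_path(square, 100))  # the same crisp shape at any size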