TUTORIAL: How Film & Video Work (Resolution)

ReMastered™ Virtual Video™

Einstein: Director of Marketing


Concepts and Terminology of Resolution:

The Theory of Resolution in translating movie film images into digital video images is best demonstrated by a "shadow box" -- and the fingers of a Shadow Puppeteer.

Do you see the "shadow puppet" in the "shadow box"?

Film works based on "black shadow and white light". What you need to know about "shadow and light" is -- where one ends, the other begins. This "beginning and end" is "a boundary" -- which forms an "outline" -- which, mathematically, is "a line". "A line" is "linear data". This "line" is the only "data" known about the fingers that formed the shadow.

The "black area" -- inside the shadow -- and the "light area" -- around the shadow -- offer ABOSOLUTELY NO MORE DATA (DETAIL INFORMATION)about the image. Thus the "black" and the "white" are "non-image data".

I call "the shadow" itself "Proxy Data", because "the shadow" take the place of "the fingers", in the shadow box -- and takes the place of "siver halide" in the "projected image" -- from movie film.

As you can see, magnifying any part of the shadow 10 times -- or magnifying any part of the raw light area even 10,000,000 times -- to get MORE INFORMATION will give you ABSOLUTELY ZERO "additional" information about the "shadow puppet".

So, what if some "wise guy" records "the thumb" instead of "the shadow" -- if he magnifies the "thumb" itself? Do the color of the skin, flakes of skin, folds in the skin, hair on the knuckle, and the dirt under the fingernail add MORE DETAIL (data) about the "shadow puppet image"?

Absolutely NOT! In fact, if we ADDED details about "the thumb" -- magnified many times -- to the "shadow puppet" -- we get "artifacts"!

If you magnify the "artifacts" -- you get "digital blocks" -- made up of Red, Green, and Blue dots -- inside of what was once "a nice black shadow"!

Bottom line is -- "the line" -- is the ONLY TRUE DATA we have about our "shadow puppet". Everything else -- and there is A LOT of everything else -- including the "proxy data" -- is "non-image data".

MOVIE FILM IMAGES are made from millions of these "LINES" -- where "shadow and light" meet. These "lines" are the ONLY information about the "shapes and surfaces of objects" which were "exposed" to the "original movie film". Shining light through the film reveals both "true-image" data -- and many, many, many times MORE "non-image" data.

In movie film, anything that is "one-dimensional linear data" (with length only) is "true image data". Anything that is "two-dimensional area data" (with height and width) is "non-image data".

Therefore, a CODEC designed specifically for the digitization of MOVIE FILM IMAGES would increase the "linear density" and decrease the "area density" of an image.
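To make the "linear data" versus "area data" distinction concrete, here is a minimal sketch in Python (a toy illustration with an invented rectangular "shadow", not part of any real film-transfer CODEC). Only the boundary pixels -- where shadow meets light -- carry the shape; magnifying the filled area multiplies the "non-image data" without adding a single new boundary point.

    import numpy as np

    # A toy 64x64 "shadow box": 1 = black shadow, 0 = raw light.
    frame = np.zeros((64, 64), dtype=np.uint8)
    frame[16:48, 20:44] = 1          # a rectangular "shadow puppet"

    # Boundary ("line") pixels: shadow pixels with at least one lit neighbor.
    padded = np.pad(frame, 1, mode="edge")
    neighbors = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                 padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = (frame == 1) & (neighbors == 0)

    print("shadow area pixels :", int(frame.sum()))       # two-dimensional "proxy data"
    print("boundary pixels    :", int(boundary.sum()))    # the only "true image data"

    # Magnifying the frame 4x (nearest-neighbor) multiplies the AREA by 16,
    # but the boundary still traces the same outline -- no new detail appears.
    zoomed = np.kron(frame, np.ones((4, 4), dtype=np.uint8))
    print("zoomed area pixels :", int(zoomed.sum()))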

Most "film transfer" machines -- do the opposite; and the larger the video format used in the capture, the more damage is done.

For this reason, using a massively dense 4K CODEC to "capture" a tiny frame of 8mm film is a "stupidity" -- one that you are now "smart enough" to recognize -- grab your wallet and click off the website.

Bottom "line":

The "highest resolution" of AN IMAGE on movie film, can be only "as fine" (resolute with a firm purpose) as the average size of the tiny grains of silver halide in the original movie film.

Other secondary factors, of course, are:

  1. The "optical resolution" of the original lens; and,
  2. The "integrity of focus" within the original focal plane.

No "higher resolution" lens or "higher resoluton" video format used -- after the film is exposed -- can inprove the native resolution of the image in original movie film.

That said, we do have our "tricks of the trade" -- to make the final image "better than the original".

"RESOLUTION" is a noun that means: firmness of purpose or intent; determination.

The word "LINE" is a noun that means: 1) Mathematics. a continuous extent of length, straight or curved, unidimensional (ONE-dimentional) - without breadth or thickness; the trace of a moving point. 2) an indication of demarcation; boundary; limit:

The words "LINES OF RESOLUTION" refers to a field of lines -- without linear conversion -- with "firm boudaries" for visual purpose.

In math, where "lines" or "boundaries" blur or blend -- they take on non-linear properties -- of height and width -- and become mathematical "area". Firm purpose is lost. An "AREA" (of non-purpose) is called noise.

Bottom "line", when lines "blur" or "blend" -- their RESOLUTION of "firm purpose" -- is LOST.

MOVIE FILM HAS LINES OF RESOLUTION:

Horizontal lines, vertical lines, and diagonal lines.

LINES OF RESOLUTION as a Modulation Transfer Function (MTF)

MTF Chart

FIGURE 1: "EIA Resolution Chart 1956" Licensed under Public Domain via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:EIA_Resolution_Chart_1956.svg#mediaviewer/File:EIA_Resolution_Chart_1956.svg

The converging lines (above) measure resolution -- as viewed through a camera. Where the lines BLUR, resolution is lost. This chart shows the limitations of TV in 1956.

I use this chart here only because many OLDER COMPUTER MONITORS will show this chart correctly.

ALL film schools and film professionals measure "film resolution" in terms of "lines" -- using a system called the Modulation Transfer Function (MTF). MTF is "the industry standard" for quality control and measurement of "film". It is also used to compare "film" to "digital" formats.

See example of MTF chart at link below:

Reference: http://filmschoolonline.com/sample_lessons/sample_lesson_HD_vs_35mm.htm

Many tools are used to measure RESOLUTION -- like the EIA Resolution Chart above. A more current state-of-the-art standard for measuring MTF in digital cameras is an ISO 12233 target.

See a Google Image Search for "MTF Standard Resolution Chart Film".

WHAT IS IMPORTANT FOR YOU TO KNOW:

It is common practice to compare horizontal and vertical lines of resolution for both film and video, AND there are many tools with which to do that -- most especially MTF.

SHADOW AND PARTICLE RESOLUTION IN BLACK & WHITE MOVIE FILM:

Figure 2A Reference: Kodak.com

FIGURE 2: "Depiction" of Microscopic View of a Tiny Segment of a Frame of Movie Film (above) NOTE: In real live, silver halide is a crystaline structure (see Figure 2A above). For simplicity, I show more rounded particles.

Copyright 2010 Bruce Mayfield all rights reserved.

The IMAGE™ above shows the key elements of Black & White movie film. In a film-to-video transfer, the RESOLUTION of the VIDEO format used is determined by the MTF of the movie film -- that is to say, by the size of the GRAIN or GRAIN SHADOW in the movie film.

Once 100% of the SIZE OF THE GRAIN has been CAPTURED in a VIDEO FORMAT -- as a single cluster of RGB information -- NO MORE RESOLUTION can be achieved.

Figure 2B: Crude example of how "digital doubling technology" in a video CODEC creates a "digital block" from a single "data point" in "movie film". "Best Matching" a tiny frame of 8mm film to a video CODEC is superior to "Hyper-Formatting" (i.e., capturing "small format film images" to "over-sized video formats"). In reality, each black dot above would be a cluster of Red, Green, Blue (RGB) data. That means 3 RGB points to 1 black point. Linear "resolution" -- where shadow meets light -- is quickly lost -- forever!

LARGER VIDEO FORMATS subdivide the integrity of each shadow area -- using many RGB clusters to represent each black speck. This is because the technology of movie film works very differently than the technology of video. (More on this: See Shadow Box picture below)

In testing for RESOLUTION, finding the average height and width of a grain of silver halide (see IMAGE™ above) would be the objective of an MTF measurement -- for optimal GRANULAR RESOLUTION. The degree of MTF Resolution is determined by the average size of the particles of silver halide in the emulsion side of the movie film.

MATCHING a video format to a film format will allow a FULL GRAIN or GRAIN SHADOW to be expressed as no LESS than ONE RGB pixel cluster and NO MORE than 4 RGB pixel clusters. Anything beyond this "match" is called "Hyper-Formatting".


Why? A circle can be subdivided into four quarters -- each quarter circle preserving "AN EDGE" of the original circle. This is a theoretical target; however, the greater the subdivision of each partial, the greater the chance of LARGER digital artifacts -- due to the differences between the way film and video technology work (more on the differences below).
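As a rough, hypothetical illustration of that "no less than 1, no more than 4 clusters" target, the sketch below estimates how many RGB pixel clusters one grain shadow would span at different capture widths. The frame width and grain size are assumed round numbers chosen for the arithmetic, not measured values for any particular film stock.

    # Assumed numbers, for illustration only (not measured values).
    FRAME_WIDTH_MM = 4.8          # approximate width of a regular 8mm frame
    GRAIN_SIZE_UM = 5.0           # assumed average grain-shadow diameter

    def clusters_per_grain(horizontal_pixels):
        """Rough count of RGB pixel clusters covering one grain shadow."""
        um_per_pixel = (FRAME_WIDTH_MM * 1000.0) / horizontal_pixels
        pixels_across = GRAIN_SIZE_UM / um_per_pixel
        return pixels_across ** 2          # pixels across, squared = area coverage

    for name, width in [("SD  (720)", 720), ("HD  (1920)", 1920), ("4K  (3840)", 3840)]:
        print(f"{name}: ~{clusters_per_grain(width):.1f} clusters per grain shadow")
    # Under these assumptions: below 1 cluster loses data, 1 to 4 is "matched",
    # and far beyond 4 subdivides each grain -- the "Hyper-Formatting" described above.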

Each particle or "grain" can be thought of as a "circular bit" of data -- like an "on" or "off" bit in the binary math used by a computer. The "bit" itself is NOT PART OF THE IMAGE™ but rather defines the "RESOLUTION" of a "BOUNDARY" of something within THE IMAGE™.

In fact, every PHYSICAL GRAIN or GRAIN SHADOW is PROXY DATA -- which is to say, a PHYSICAL GRAIN or GRAIN SHADOW takes the place of "true IMAGE™ detail". Only the very "edge" of the PHYSICAL GRAIN or GRAIN SHADOW stores "true image detail" about the "original objects" -- exposed to light -- and recorded on the movie film.

The BLACK AREA of the PROXY DATA in movie film in fact MASKS the true IMAGE detail or color data that SHOULD BE THERE -- in the PHYSICAL SURFACE AREA of the GRAIN of SILVER HALIDE -- which creates the BLACK AREA of the GRAIN SHADOW.

Video has "proxy data" in the form of three colors -- Red Blue Green (RGB) which also masks "the original image" -- which I take about later. Look at any digial video display with a magnifying glass. You will NOT see more detail in the image -- but instead you will see hundred of RGB pixels -- in proxy to details "that should be there". More latter.

Back to movie film...

Only the very EDGE -- the edge of the PHYSICAL GRAIN (where matter blocks light energy) -- or the very edge of the GRAIN SHADOW (where light meets dark) -- can be regarded as "true-IMAGE™ detail".

The SHADOW MASS -- the (two-dimensional) AREA OF BLACK MASS (see picture above) -- is NOT "true-IMAGE™ data". It is only proxy data -- which "implies" something was there -- something between the lines -- where light meets shadow.

RESOLUTION in FILM is the ONE-dimensional line separating light and darkness. This "line" is "a line of truth" -- the ONLY TRUE DATA about the original "photographed image" in a frame of film. Millions of these tiny "lines of truth" reveal "the true image" stored in a frame of film.

The trick is to retain this "boundary line" -- in a digital video image -- as 3 points of RGB color.

Most people in video "don't get this". They mistakenly think forcing ONE-dimensional data into a LARGER TWO-dimensional video format will make a BETTER IMAGE™.

In fact, the LAW OF DIMINISHING RETURNS (or MARGINAL UTILITY) says that adding more of SOMETHING actually gains you LESS AND LESS VALUE -- as MORE AND MORE is added. This law certainly applies to trying to maximize the ONE-dimensional line of data in movie film -- into video -- which can quickly distort "a line" into "a block" of data.

EXAMPLE: Adding more water to the soup -- to "stretch the soup" -- does NOT make BETTER soup, only MORE soup. And, there is a point "of diminishing returns" at which MORE is WORSE!

EXAMPLE: Adding too much video format to the film IMAGE™ is like adding too much water to the soup. You do NOT get better soup or better images. Stretching a tiny 8mm image of 4:3 ratio onto a huge 4K 16:9 video format is a fool's errand.

NOTE: We produce great video from 8mm film -- for display on a 4K screen; but "hyper-formatting" is NOT the way we do it!

FILM TRANSFER TO VIDEO is the delicate art of matching the MTF of movie film with the MTF of a video format in such a way as to capture ALL the "true IMAGE™ detail" within the movie film without polluting the IMAGE™ with artifacts from video technology.

SHADOW AND PARTICLE RESOLUTION CONTINUED:

Reference: Kodak.com

Take care of DETAILS and the BIG PICTURE takes care of itself.

Movie film is made up of microscopic, opaque non-IMAGE™ masses -- called "grains" (of silver halide -- see picture above and example below).
The size of these grains determines the maximum resolution of the film image.

Stated another way...

In a film-to-video transfer, the video IMAGE™ can have NO MORE RESOLUTION than the movie film.
Therefore, film grain is very important in choosing which video format to use.

The limiting factor to IMAGE™ resolution in a Movie Film Transfer is the MTF of the Movie Film itself -- that is to say -- the size of the grains of silver halide in your movie film.

Transferring a tiny frame of movie film to a very large video format does NOT create more resolution than can be found in the original movie film; however,
Transferring a tiny frame of movie film to a very large video format does create digital artifacts!
Transferring any frame of movie film to a TOO SMALL video format causes the LOSS of data.
The best film transfers match the film format to the video format.

FACT: A tiny 8mm frame of film is 15,600 times SMALLER than a 50-inch HDTV screen!

FACT: 35mm film, according to MTF measurements of LINES OF RESOLUTION, approaches the lines of resolution of an HDTV.

Reference: http://filmschoolonline.com/sample_lessons/sample_lesson_HD_vs_35mm.htm

Comparing 35mm film to 8mm film: comparing GROSS AREA ONLY -- with NO allowance for grain size or the space for the pull-down perforations in the film...

FACT: The GROSS AREA of a frame of 35mm film is 918.75 square mm; the GROSS AREA of a frame of 8mm film is 48 square mm.
That means it would take just under 20 frames of 8mm film to PHYSICALLY fill a 35mm frame of film.

If you consider that 8mm silver halide grain from the 1940s era was "mechanically milled", and the silver halide grain of present-day 35mm is "electrochemically distilled", a doubling of 20 frames to 40 frames of 8mm film would be arguably fair and accurate.

So, to generalize how many TIMES a frame of 8mm film must be multiplied for an equivalent size in HDTV technology, one could argue roughly 40 times.
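The arithmetic behind that estimate can be checked in a couple of lines, using only the figures quoted above (918.75 sq mm, 48 sq mm, and the author's doubling allowance for coarser vintage grain):

    AREA_35MM_SQ_MM = 918.75     # gross frame area quoted above
    AREA_8MM_SQ_MM = 48.0        # gross frame area quoted above

    area_ratio = AREA_35MM_SQ_MM / AREA_8MM_SQ_MM      # ~19.1 ("just under 20")
    grain_allowance = 2.0                               # doubling for coarser 8mm grain
    print(round(area_ratio, 1), round(area_ratio * grain_allowance, 1))   # 19.1  38.3  -> "roughly 40"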

What this means is that HDTV technology will "segment" each grain of film a few times -- using several RGB pixel configurations; however, "Hyper-Formatting" to video formats like 4K or RED will "segment" each grain of film MANY TIMES.

FOR EXAMPLE: A cluster of 3 Red, Green, Blue pixels (turned off) can be argued to faithfully reproduce a single grain of silver halide.
Enlarging that to...
A cluster of 12 Red, Green, Blue pixels (turned off) can be argued to unfaithfully reproduce a single grain of silver halide -- with artifacts -- because the CIRCULAR GRAIN is now a SQUARE.
Enlarging that (hyper-formatting) to...
A cluster of 48 Red, Green, Blue pixels (turned off) can be argued to destroy 100% of the integrity of the original grain -- as NOTHING BUT ARTIFACT -- because the square is now a SQUARE BLOCK -- a visible artifact. This is why most artifacts are "DIGITAL BLOCKS".
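A tiny sketch of that progression, treating one grain as a single dark point and assuming each step doubles the linear pixel count of the capture format (an assumption chosen to reproduce the 3 / 12 / 48 sequence above, since doubling linear resolution quadruples the pixel count over a given area):

    BASE_CLUSTER = 3                 # one RGB cluster: 3 sub-pixels, turned off

    pixels = BASE_CLUSTER
    for label in ["matched format", "2x larger format", "4x larger format"]:
        print(f"{label:16s} -> {pixels:3d} dark sub-pixels standing in for ONE grain")
        pixels *= 4                  # doubling the linear resolution quadruples the area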

At this level of distortion, the CODEC will "try" to supplement RGB data into each of the many subdivisions of each GRAIN of BLACK AREA -- in an attempt to APPROXIMATE what should be there. Effectively this creates GRAIN-ON-GRAIN. This is especially true when the movie film ITSELF is SCANNED -- instead of capturing a PROJECTED IMAGE™.

NOTE: Most studios "scan" film -- directly from the "film" face. They are taking pictures of the actual "grains of silver halide" -- NOT the SHADOWS of the GRAINS. Even assuming that the SHAPE of each grain could be preserved in "hyper-formatting", another problem remains: when using too large a video format -- like 4K or RED -- for the format of the movie film, DETAILS on the surface of each grain ARE CAPTURED or MANUFACTURED -- becoming part of the overall IMAGE™ from the film. THESE ARE ARTIFACTS CREATED BY THE DIGITAL CAPTURE PROCESS -- creating GRAIN ON THE GRAIN -- which was NEVER INTENDED TO BE PART OF THE ORIGINAL IMAGE (see IMAGE™ below).

Figure 3: Grain-on-Grain: Details NOT part of the Original IMAGE™.
Copyright 2010 Bruce Mayfield all rights reserved.

It is bad enough that the BLACK AREAS are actually PROXY DATA for both 1) true-IMAGE™ detail and 2) color space in a film IMAGE™. Worse yet is when GRAINS are blown up -- say 80 times -- in OVER-SIZED video formats. The OVER-SIZED video format tries to create data -- where there obviously should be data. That's what CODECs do! However, with movie film, this is the wrong thing to do. Once this BLACK AREA from movie film is polluted, this "garbage data" is visible to the naked eye -- at best as "random" specks of RGB light; at worst as DIGITAL BLOCKS.

THE BIGGER PROBLEM -- THE CURE!

Because studios CREATE THE PROBLEM, they are obligated to CURE THE PROBLEM. To do that, they use SMOOTHING FILTERS -- which blend TWO-DIMENSIONAL DATA into ONE-DIMENSIONAL DATA (see IMAGE™ below) -- DESTROYING the ONE-dimensional data.
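Here is a minimal sketch of that trade-off, using a plain 3-tap moving average as a stand-in for whatever proprietary smoothing a studio might actually apply; the scan line and the "speck" in it are invented for illustration. The filter subdues the speck, but it also spreads the once-firm shadow edge across several samples -- turning a line into an area.

    import numpy as np

    # A 1-D scan line: raw light (0.0) meets a grain shadow (1.0) at a sharp edge,
    # with one "grain-on-grain" speck (0.6) sitting inside what should be solid shadow.
    line = np.concatenate([np.zeros(8), np.ones(8)])
    line[12] = 0.6

    # A 3-tap moving average standing in for a studio "smoothing filter".
    smoothed = np.convolve(line, np.ones(3) / 3.0, mode="valid")

    def in_between(signal):
        """Count samples that are neither light nor dark -- i.e., a blurred boundary."""
        return int(np.sum((signal > 0.1) & (signal < 0.9)))

    print("before smoothing:", in_between(line))      # 1 -- just the speck
    print("after  smoothing:", in_between(smoothed))  # 5 -- speck subdued, edge spread into an area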

Notice the GRAIN-ON-THE-GRAIN is subdued but not gone (if your computer monitor has enough RESOLUTION to display the detail below); however, also notice that the sharp "edge of the shadow" (giving the IMAGE™ its RESOLUTION of PURPOSE) is lost, too. As you can see, nothing has been gained and much has been lost by transferring a tiny 8mm frame of film to an over-sized video format -- and then trying to "fix it". The result is a final IMAGE™ WITH A FLAT LOOK -- where you can NOT see the grain (i.e., the details in surfaces in your movie film IMAGE™) -- which means your IMAGE™ is "out of focus" at the level of the GRAIN and GRAIN SHADOW.

The INSULT to this INJURY is that "8mm Film-Transfer Mills" will tell you "this is GOOD!" -- telling you that "You do not want to see the grain in movie film".

What they are actually saying, is they do not want you to see how badly THEY screwed-up the grain in your movie film.

Well, look at Figures 2, 3, and 4 -- yourself -- and decide for yourself which GRAIN PATTERNS make the best images.

Figure 4: Grain out-of-focus due to smoothing filter.
Copyright 2010 Bruce Mayfield all rights reserved.

We actually did apply a smoothing filter to Figure 3 to create Figure 4.

First Rule of Computer Science:

"If you put garbage data in; you will get garbage data out".

It is far better to give your HDTV "clean" BOUNDARIES of RESOLUTION,
than to give your HDTV "muddy" BOUNDARIES of RESOLUTION.

Your video archive files should have detail at the GRAIN and GRAIN SHADOW LEVEL more like Figure 2 above and NOT like Figures 3 and 4 above.

HOW VIDEO WORKS

FIGURE 5: "Additive Color" by Original uploader was SharkD at en.wikipedia Later versions were uploaded by Jacobolus at en.wikipedia. - Originally from en.wikipedia; description page is/was here.. Licensed under Public Domain via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:AdditiveColor.svg#mediaviewer/File:AdditiveColor.svg

Figure 5 shows how the 3 (Primary) colors Red, Green, Blue (RGB) create 3 other (Secondary) colors -- Magenta (pink-purple), Yellow, and Cyan (aqua blue or sky blue) -- plus pure White light. Figure 6 (below) demonstrates this principle in a real-life photo.
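A minimal sketch of that additive mixing, with the three primaries held at full, equal intensity (the 0-255 channel values are just the usual 8-bit convention, used here for illustration):

    # Additive RGB mixing: each channel is 0-255; overlapping light ADDS.
    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    def mix(*colors):
        """Add light sources channel by channel, clipping at full intensity."""
        return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

    print("R + G     =", mix(RED, GREEN))         # (255, 255, 0)   Yellow
    print("R + B     =", mix(RED, BLUE))          # (255, 0, 255)   Magenta
    print("G + B     =", mix(GREEN, BLUE))        # (0, 255, 255)   Cyan
    print("R + G + B =", mix(RED, GREEN, BLUE))   # (255, 255, 255) White
    print("all off   =", mix())                   # (0, 0, 0)       Black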

IN LAYMAN'S TERMS...

THE WAY VIDEO WORKS
is by CONTROLLING the intensity of 3 COLOR-EMITTING PIXELS -- Red, Green, Blue (RGB) -- in combinations that create ALL colors visible to the human eye.

Pixels are so close together that when all three pixels are turned on -- at equal intensity -- the human eye sees WHITE LIGHT -- see the center of the figure above.

Black is considered a color when all 3 pixels are turned off.

RESOLUTION can never be smaller than a 3-pixel cluster (3 pixels surrounded by black) -- which has TWO dimensions -- height and width. It is the RESOLUTION OF THE VIDEO DISPLAY that determines RESOLUTION -- based on information TRANSLATED from the video format.

I say TRANSLATED, because a video display always converts a digital video signal to (BRACE YOURSELF FOR THIS) an analogue signal. This is because (BRACE YOURSELF FOR THIS) YOUR EYEBALLS ARE ANALOGUE RECEIVERS!

Video creates RESOLUTION OF BOUNDARY by the contrast in RGB Color or Intensity between two RGB Pixel clusters. Because there is BLACK AREA between each RGB Pixel cluster, video "looks sharper".

THE WAY FILM WORKS
is by the use of microscopic particles or GRAINS of Silver Halide -- which turn black.

Referred to simply as "grain" -- in a frame of movie film, grain acts LIKE tiny fingers in a "shadow box" -- to make "shadow puppets".

That is to say, the grains and grain clusters of silver halide are arranged in patterns which "block light" -- to create TWO-dimensional "black shadow images" -- in the surrounding "raw spatial light".

Copyright 2016 Bruce Mayfield all rights reserved.

In the example above, the puppeteer's fingers (upper right) "act like" grains of silver halide. Light passes "between" the fingers -- like light passing "between specks of silver halide" -- to form a "shadow image" (lower left) -- in the "frame" of the "shadow box" -- for the audience to see on the other side of the box.

All the audience sees, on their side of the frame, is "the shadow" -- not the fingers.

All you see -- at the movies -- are "tiny shadows and (colored) light" projected onto the "silver screen" -- not the physical film or physical specks of silver halide -- in the back of the theater -- in the projector. Only by sitting too close to the screen do you become distracted by the "tiny shadows and (colored) light" that make up the overall "image".

Back at the shadow box (above), neither zooming into the fingers -- nor zooming into dark areas of the shadow -- nor zooming into areas of raw light (around the shadow) -- will produce "more resolution".

The only "true image detail" is "the mathematical line" -- "not the mathematical area" -- where "shadow meets light".

Therefore...

Neither shooting the fingers with a 4K camera (versus an HDTV camera) -- nor shooting the dark areas of the shadow with a 4K camera (versus an HDTV camera) -- nor shooting the raw light areas with a 4K camera (versus an HDTV camera) -- will produce "more resolution".

Again, the only "true image detail" is "the mathematical line" (a single-dimensional construct) -- "not the mathematical area" (a two-dimensional construct) -- where "shadow meets light".

Bottom Line:

Resolution is determined by "the focus" of the "original lens" -- used to shoot the "original object" -- captured as an image on the original movie film.

The use of "hyper-formated" video CODECs only add "noise" -- in the "areas of shadow" and into the "areas of (colored) light". This "two-dimentional noise"(clusters of Red Blue Green) actually breaks-up the original "single-dimentional (black and white) line of focus".

The use of "undersized" video CODECs create digital blocks -- which also "break-up" the original "line of focus".

MATCHING the "film format" to the "video format" CODECs optimizes the image of the original movie film. Once optimally digitized, modern video compression and video reformatting CODECs can "then" be applied for "enhancement" purposes.

Final Line to the Bottom Line:

There is only so much information in a tiny frame of 8mm movie film. There are diminishing returns and then negative returns -- using a "bigger hammer" to drive a "tiny nail" -- or using a "bigger video format" to capture a "tiny frame of film".


Thus MATCHING THE VIDEO FORMAT -- TO THE FILM FORMAT -- IS MOST IMPORTANT.

Once the "optimal resolution" of the "film" or "fingers" is reached

Most technology from last century "shot the grains" -- not the shadow. As you can imagine, using higher magnification "on the hand/grains" will NOT make the "shadow any sharper".

For example, shooting the fingers with, say, a 4K camera might allow you to zoom into the dirt under the fingernails, but that visual information is NOT part of the "shadow image" -- created by the "outline of the fingers".

Therefore, RESOLUTION is determined by the "outline of fingers" or the "outline of the particles of silver halide".

 

In the absence of any color dyes (as in black and white film), raw white light surrounds the shadows -- yielding a black and white IMAGE™.

With color film, the "raw spatial light" is made up of three primary colors: Red, Green, Blue (RGB). These RGB colors originate from RGB dyes in the film -- that surround the particles of silver halide. When light passes through the film, colors surround the "SHADOWS of the GRAIN" in the projected IMAGE™.

The EDGE -- where Red, Green, Blue (RGB) COLORS COME TOGETHER -- can be argued to also be a BOUNDARY OF PURPOSE; however, such boundaries of purpose will also have GRAIN DENSITY associated with where the colors come together -- since the DENSITY (lightness or darkness) of Red, Green, Blue (RGB) is controlled by the DENSITY OF GRAIN CLUSTERS -- the same way the Grey Scale varies in intensity in black and white movie film. Thus a BOUNDARY of PURPOSE -- using color in movie film -- is different than a BOUNDARY of PURPOSE -- using color in video.

For this reason, it can be argued that FILM "looks softer" but "better" than video on the BIG SCREENS -- and, I might add, "at this time".

Finally, a movie film IMAGE™ is "designed" to be projected onto a "silver screen" (more on the significance of this later).

A film IMAGE™ was NOT designed to be viewed from the film itself. Viewing film directly introduces lens flare from raw light. Scratches in the film are seen as raw white light, which are subdued by the "silver screen" and by the "laws of reciprocity" (i.e., the spreading and bending -- blending -- of light in air).

More on this later.

IN MORE TECHNICAL TERMS..

SHADOW AND PARTICLE RESOLUTION IN MOVIE FILM:

FIGURE: Depiction of Microscopic View of a Tiny Segment of a Frame of Movie Film (above) Black dots represent physical silver halide in the emulsion of the frame of movie film.

Copyright 2010 Bruce Mayfield all rights reserved.

THE WAY FILM WORKS:

Copyright 2016 Bruce Mayfield all rights reserved.

With Movie Film technology, the detail in an IMAGE™ both STARTS AND STOPS at the edge of THE SHADOW -- of every GRAIN of silver halide (like the shadow of fingers).

The SHADOW AREA of each GRAIN (like the fingers themselves) is non-IMAGE™ data -- called BLACK AREA (see example above). It is this BOUNDARY -- a "LINE OF RESOLUTION" -- where shadow meets light -- THE EDGE of the BLACK AREA of a SHADOW -- that is the ONLY DATA ABOUT IMAGE™ DETAIL in the film IMAGE™. The BLACK AREA is NOT IMAGE™ DETAIL, but is rather PROXY DATA -- standing where OTHER IMAGE™ DETAIL LOGICALLY SHOULD BE -- BUT IS NOT.

"LINES OF RESOLUTION" means: "A boundary of firm purpose" of ONE-dimention - where ONE THING ENDS and ANOTHER THING BEGINS.

The word LINE is important here because, mathematically, A LINE is ONE-dimensional data -- that is to say, a LENGTH WITHOUT HEIGHT OR WIDTH. Video CODECs and "video people" DON'T GET THIS!

IN FILM, an EDGE or a LINE or a BOUNDARY can be argued to be NOT VISIBLE, although it can also be argued to be OBSERVABLE as a STARK CONTRAST between two AREAS -- WHERE ONE THING ENDS AND ANOTHER THING BEGINS -- the way the human eye sees objects in "REAL LIFE".

This paradox of "being invisible, yet being observable" is easily explained in mathematics as, for example, the TWO-dimensional AREA of a CIRCLE and the ONE-dimensional linear CIRCUMFERENCE of the same CIRCLE.
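To ground the analogy with standard geometry: for a circle of radius r, the area is A = π·r² (a two-dimensional quantity, measured in square units), while the circumference is C = 2·π·r (a one-dimensional quantity, measured in plain length). The circumference has length but zero area -- you cannot "see" it as a surface, yet it is exactly where the circle ends and everything else begins. The grain-shadow "edge" plays the role of C; the black mass plays the role of A.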

FIGURE: "RGB illumination" by en:User:Bb3cxv - http://en.wikipedia.org/wiki/IMAGE™:RGB_illumination.jpg. Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:RGB_illumination.jpg#mediaviewer/File:RGB_illumination.jpg

Figure 6 DEMONSTRATES the Red, Green, Blue (RGB) theory of Figure 5 above. It shows how Red, Green, Blue (RGB) raw light creates 3 other (Secondary) colors -- Magenta (pink-purple), Yellow, and Cyan (aqua blue or sky blue) -- plus pure White light.

COLOR MOVIE FILM is simple -- once you understand Black & White Movie Film.

There are three dye layers -- Red, Green, and Blue (RGB) -- in color film, through which raw light passes. There is NO detail in raw light or raw color; however, the RGB colors DO BACK-LIGHT the GRAIN -- which results in GRAIN SHADOWS being surrounded by RGB COLORS.

Figure 6 above best demonstrates how RGB colors -- also called "Primary Colors" -- interact to form Secondary Colors. Within Primary and Secondary Colors can be found ALL other colors visible to the human eye. Raw color contains NO DETAIL information about the IMAGE™ -- that is to say, RAW COLOR has no RESOLUTION -- NO BOUNDARY OF PURPOSE, NO TRUE IMAGE™ DETAIL. Also notice that the BLACK AREAS -- like GRAIN SHADOWS -- have NO DETAIL. More detail on color can be found at Wikipedia -- via the link above.

With movie film, the default BACKGROUND -- in the absence of any COLOR or SHADOW -- is WHITE SPACE. WHITE AREA (thus named because it is made up of RAW LIGHT in film -- which is also made up of RGB colors -- see Figure above) contributes NO DETAIL to the IMAGE™ -- just like BLACK AREA contributes NO DETAIL (see IMAGE™ above). The word AREA is important here because, mathematically, the word AREA means two-dimensional SPACE -- with both height and width. Unlike the DATA FOR DETAILS in an IMAGE™, COLOR can ONLY be seen in multidimensional SPACE. Thus the term COLOR SPACE -- in TWO-dimensional images. (Video does NOT work like this.)

Finally, the BLACK AREA "does take up space" by PROXY -- MASKING what would otherwise be either RGB COLOR SPACE data or other IMAGE™ DETAIL. Many OVER-SIZED video CODECs will try to GUESS what MIGHT EXIST inside the BLACK AREA by creating OTHER RGB PROXY data. Unfortunately, in the creation of this two-dimensional data, the original ONE-dimensional data of the EDGE of the SHADOW is destroyed. This is why it is better to match the MTF of the movie film with the MTF of the video format -- which, at the time of final display, has the goal of creating a one-to-one -- GRAIN-to-PIXEL -- relationship -- after each grain (of 8mm film) has been magnified roughly 40 times -- for HDTV.

THE WAY VIDEO WORKS:
Video technology assumes ALL of the IMAGE™ has both color and detail, AND that every POINT of Red, Green, Blue (RGB) Color is in fact THE DETAIL in the IMAGE™, AND that, in fact, BLACK is considered just another COLOR. Video also assumes that every POINT of RGB Color is surrounded by BLACK BACKGROUND -- with the absence of RGB COLOR being BLACK. (This is the exact opposite of the way film works.)

Video assumes DETAIL has the height and width of THREE RGB pixels AND the surrounding black space. Because video has BLACK AREA associated with BOUNDARIES OF RESOLUTION -- which is in fact a ONE-dimensional, LINEAR concept mathematically -- VIDEO FORMAT SIZE will always be an issue in film-to-video transfers.

COMPARISON OF VIDEO TO FILM -- IN A NUTSHELL:

Video assumes everything in an IMAGE™ has detail, AND that every UNIT OF VIDEO is, ultimately, the AREA of 3 RGB pixels surrounded by BLACK SPACE on a video display.

Film assumes NOTHING in an IMAGE™ has detail -- except where BLACK AREA meets WHITE AREA -- at the EDGE of the GRAIN SHADOW -- all of which are surrounded by WHITE SPACE.

IN film-to-video TRANSFERS: The video CODEC "tries" to convert BLACK AREA into "IMAGE™ detail" -- based on AVERAGING ALGORITHMS -- when in fact BLACK AREA is BLACK. In the averaging process, the BOUNDARY of RESOLUTION is lost -- blended away into TWO-dimensional data.
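A minimal sketch of how interpolation-style upscaling -- used here as a generic stand-in for whatever averaging a particular CODEC performs, not a claim about any specific product -- manufactures in-between values where the film held only "black" and "light":

    import numpy as np

    # One scan line across a grain shadow: pure light (0) and pure black (1).
    line = np.array([0, 0, 0, 1, 1, 1], dtype=float)
    positions = np.arange(len(line))

    # Upsample 4x two ways.
    fine = np.linspace(0, len(line) - 1, 4 * len(line) - 3)
    nearest = line[np.round(fine).astype(int)]        # nearest-neighbor: still only 0s and 1s
    averaged = np.interp(fine, positions, line)       # linear interpolation: "averaged" values

    print("values after nearest-neighbor:", sorted(set(nearest.tolist())))
    print("values after averaging       :", sorted(set(np.round(averaged, 2).tolist())))
    # The averaged version now contains 0.25, 0.5, 0.75 -- data that was never
    # on the film, sitting exactly where the sharp shadow edge used to be.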

In OVER-SIZED video formats, the SHADOW of the GRAIN becomes Hyper-Sized and is thus subdivided into a color matrix of many RGB components -- all of which become AVERAGED into the IMAGE™ -- as TWO-dimensional RGB color -- giving a GRAIN-ON-THE-GRAIN appearance -- which is NOT in the original film -- even at extreme magnification. Each GRAIN SHADOW becomes a greatly enlarged DIGITAL ARTIFACT -- which compromises the integrity of the original RESOLUTION BOUNDARY.

For this reason, it is better to choose a video format that sees a GRAIN SHADOW as a single RGB color data point, rather than seeing a GRAIN SHADOW, enlarging it, subdividing it, and "trying" to integrate it into IMAGE™ DETAILS.

It is within this context that film-to-video transfers are made.

IN SUMMARY:

THE MOST IMPORTANT THING TO KNOW ABOUT MOVIE FILM:

Only the "edges" of the SHADOWS of the grain and grain clusters -- create "IMAGE™ detail".
Stated another way...
"Detail in a film IMAGE™" is created ONLY where "shadow meets light" -- at the level of the GRAIN.
The BLACK AREA of the SHADOW of a GRAIN is "non-IMAGE™ detail" -- information NOT part of the original IMAGE™.

I call this SHADOW EDGE -- "true-IMAGE™ data" or "true-IMAGE™ detail".

In MOVIE FILM there is
NO "IMAGE™ detail" in the "raw light" -- where there is no color dye;
NO "IMAGE™ detail" in the "raw color" -- from the RGB dyes;
NO "IMAGE™ detail" in the "raw shadow" - from BLACK SHADOW of the GRAINS of silver halide.

THE MOST IMPORTANT THING TO KNOW ABOUT" The EDGES" of "The SHADOWS":

RESOLUTION
The "EDGE" of the" SHADOW" -- is ONE-DIMENTIONAL DATA.
That is to say -- mathematically -- to a computer -- "a LINE -- without any width".
There is NO width or height to the EDGE -- "where SHADOW meets LIGHT";
yet, ironically, where shadow meets light -- is where IMAGE™ DETAIL" originates in MOVIE FILM.

In layman's terms, the "edge" is a "boundary" -- where shadow ends and light begins.
The shadow is not the detail, the light is not the detail. Where the two meet, IS the detail.

All this may sound theoretical, but to a computer CODEC (Coder-Decoder) -- which does nothing but "crunch numbers" and convert them to video -- these are critical facts; which brings up another point.

CODECs are NOT designed to handle video from "movie film". They are ONLY designed to handle video which originated as "live video" -- light that bounced off objects moving through time and space.

CODECs are NOT designed to process "non-IMAGE™-detail" within an IMAGE™. The concept of "non-IMAGE™ detail" does not exist in the world of CODECs.

To the contrary, CODECs are designed to translate and integrate EVERYTHING in an IMAGE™ as if it were "reflected light". This was never a problem so long as the video CODEC was BELOW the MTF of the movie film. This is NOT a problem for LARGE FORMAT movie films, which still have MTF values that exceed those of most video formats.

However, in the world of SMALL FORMAT MOVIE FILM, over-sized video formats present a technical challenge -- safeguarding the integrity of SHADOW AND PARTICLE RESOLUTION -- WITHOUT ARTIFACT POLLUTION. That means preservation of the mythical, yet mathematically real, "edge" or "boundary" between shadow and light.

The BEST way to do this is to CAPTURE video from FILM -- matching the MTF of the Film with the MTF of the Video format.

THE SECOND IMPORTANT THING:

In MOVIE FILM, "raw light", "raw colors" and "raw shadows" are TWO-DIMENSIONAL DATA.

That is to say -- mathematically -- to a computer -- "raw light", "raw colors" and "raw shadows"
occupy space -- having both "height and width".

In MOVIE FILM there is
NO "data about detail" in "raw light",
NO "data about detail" in "raw color"
NO "data about detail" in "raw shadow".

The integrity of SHADOW AND PARTICLE RESOLUTION depends on
NOT mixing or blending "raw light/raw color" with "raw shadow" -- to preserve the "ONE-dimensional" position of "The EDGE" -- where shadow and light come together.

WHY IS THIS IMPORTANT?

Blending TWO-dimensional data (light, color, and shadows) onto ONE-dimensional data (IMAGE™ detail) DESTROYS the ONE-dimensional data (IMAGE™ details).
This happens when "raw light or raw color" data is blended with "raw shadow" data.

IN VIDEO, many things can DESTROY SHADOW AND PARTICLE RESOLUTION -- that is to say,
IN VIDEO, many things can mix "raw light / raw color" data with "raw shadow" data, as follows:

Over-compression,
over-sizing (hyper-formatting),
bad focus,
over-processing the IMAGE™,
multi-generation duplication,
digital zooming -- to name a few.

In the final analysis, the "compression" or "expansion" of TWO-dimensional data ("raw light / raw color" data with "raw shadow") onto ONE-dimensional data (IMAGE™ details) DESTROYS the integrity of the ONE-dimensional data (IMAGE™ details) -- shifting "the boundaries" of objects to where they should not be.

THIS HAS A PROFOUND EFFECT ON RESOLUTION, as seen below:

FIGURE: Why does this frame -- from movie film -- "look funny"?
Copyright 2010 Bruce Mayfield all rights reserved.

We over-compressed this frame to show you what BAD compression looks like -- AND ITS EFFECT ON RESOLUTION. However, spreading TOO LITTLE DATA into TOO BIG A VIDEO FORMAT is "like spreading too little butter over too much toast" (paraphrasing The Lord of the Rings).

This is the same "optical reason" an "out-of-focus" IMAGE™ "looks bad". Blending TWO-dimensional data onto ONE-dimensional data DESTROYS the integrity -- THE RESOLUTION -- THE BOUNDARY OF RESOLUTION -- THE EDGE -- of the ONE-dimensional data.

The difference is, once data is lost due to over-compression -- or "hyper-formatting" -- it is lost forever -- just like capturing an IMAGE™ "out-of-focus".