Magic Lantern Firmware Wiki




[AJ] Hi all - I was going to 'blow' your mind with an exciting insight into how the 'all-in-one' ASM routine works - the one that does Edge Detection, Zebra under and over exposure, Vram -> Overlay colour matching and dithering, and a Focus accumulator that could be used in future fun algorithms to get the 5D2 to record and keep something in focus .. but the wiki had a bit of a melt down and the page had to be Nuked (that's AJ for 'resetting it').

In the process of creating this, I didn't update the wiki (because I am not a fan of the new Wiki format). However, this information may help others who want to hack.

This is a quick 'mind dump'.


VRAM SEGMENTS

The 5D2 has 21 (ish) segments used by the DIGIC for video processing.

Some of these are shadow/active versions. Others are at different resolutions.

Up until this point, ML uses 'Segment' 14.

There are also segments (not listed here) that detail the Overlay memory, and the Sprite memory (for the 'Zoom' rectangle).

struct mem_segment_struct
{
   unsigned int start;
   unsigned int end;
   unsigned int size; // rough guess of start of video image
   unsigned int pitch; // Bytes per horizontal line
   unsigned int pitch_Rec; // pitch when recording
};
static struct mem_segment_struct mem_segments[]=
{ // These are the sizes when LCD display is used //
   {0x01B0FF00-0x5A0*0x17, 0x01B9CE9C, 534, 1440, 1440}, // seg 0
   {0x04000080, 0x0415407C,1360, 2048, 3744}, // seg 1
   {0x10000080, 0x1015407C,1360, 2048, 3744}, // seg 2
   {0x1C00FF50, 0x1C097FFC, 578, 1440, 1440}, // seg 3
   {0x1C414000, 0x1C4F7FFC, 912, 1440, 1440}, // seg 4
   {0x1C4F5278, 0x1C53FFFC, 224, 1260, 1260}, // seg 5
   {0x1F60FF50, 0x1F69CE9C, 540, 1440, 1440}, // seg 6
   {0x21B0FF50, 0x21B9CE9C, 596, 1440, 1440}, // seg 7
   {0x24000080, 0x2414FFFC,1335, 2048, 3744}, // seg 8
   {0x31B0FF50, 0x31B9CE9C, 596, 1440, 1440}, // seg 9
   {0x34000080, 0x3415407C,1328, 2048, 3744}, // seg 10
   {0x41B0FF50, 0x41B97FFC, 578, 1440, 1440}, // seg 11 vram[0]

   {0x44000080, 0x4414FFFC,1344, 2048, 3744}, // seg 12 HD_Vram Bank_1
   {0x4C000080, 0, 0, 2048, 3744}, //        HD_Vram Bank_2 (end/size not noted)
   {0x50000080, 0x5015407C,1328, 2048, 3744}, // seg 13 HD_Vram_Bank_3

   {0x5C00FF50, 0x5C09CE9C, 596, 1440, 1440}, // seg 14 vram[1]
   {0x5C414000, 0x5C4F7FFC, 912, 1440, 1440}, // seg 15
   {0x5C4F5000, 0x5C5576FC, 286, 1260, 1260}, // seg 16
   {0x5C578678, 0x5C5D7FFC, 320, 1260, 1260}, // seg 17
   {0x5F60FF50, 0x5F697FFC, 576, 1440, 1440}, // seg 18
   {0x64000080, 0x6415407C,1328, 2048, 3744}, // seg 19
   {0x74000080, 0x7415407C,1360, 2048, 3744} // seg 20
};
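
To make the table a bit more concrete, here is a minimal sketch (my own helper, not an ML routine) of walking one of these segments line by line; the 'recording' flag just selects pitch_Rec instead of pitch.

static void walk_segment(struct mem_segment_struct * seg, int recording)
{
   unsigned int pitch = recording ? seg->pitch_Rec : seg->pitch;
   unsigned int line;

   for (line = seg->start; line + pitch <= seg->end; line += pitch)
   {
      unsigned int * words = (unsigned int *) line;
      // each 32-bit word holds two YUV 4:2:2 pixels - see VRAM FORMAT below
      (void) words;
   }
}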



[AJ] Thanks to A1ex for noticing that the addresses of half the segments are Cached .. and the other ones are the uncached versions (I had misread 0x04000080 ... this is in Cached memory).


(Cached) Segment 1 == Segment 12 (uncached version of the same memory = use this one!)
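
A minimal sketch of that relationship, assuming (from the 0x04000080 vs 0x44000080 addresses above) that the uncached alias of an address is reached by setting bit 30:

#define UNCACHED(addr) ( (addr) |  0x40000000 )   // eg 0x04000080 -> 0x44000080
#define CACHED(addr)   ( (addr) & ~0x40000000 )   // eg 0x44000080 -> 0x04000080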

VRAM FORMAT

Each horizontal line of VRAM:

[Active pixel data] [Optional unused Pixels]

// In ML terminology the 'Pitch' = number of Active + number of unused Pixels

In my structure I call the Pitch the size in Bytes of each line. Why?

Not sure ... but I wish I'd just called it 'Bytes per line'!


Each VRAM word (a word is 4 Bytes = 32 bits = size of each register on the ARM chip) is organised as:

[Uu Yy1 Vv Yy2] [Uu Yy3 Vv Yy4] ..... -> Memory address increases in Vram
Each Word actually has its byte order changed before being written to memory.
If you load the first word into the ARM, you will get MSB [0x Yy2_Vv_Yy1_Uu] LSB (note that the bit order within each byte stays the same, so in 'Uu', 'U' is the top 4 bits and 'u' is the bottom 4 bits).


Each Vram word holds 2 YUV vram pixels: [Yy1, Uu, Vv] and [Yy2, Uu, Vv].
The Y is the brightness (= luma).
The eye is very sensitive to brightness changes - so there are twice as many Luma samples.
The U and V components are colour difference components.
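
Putting the byte order and the two-pixels-per-word layout together, here is a minimal sketch (my own helper name, not from ML) of unpacking one 32-bit Vram word as it arrives in an ARM register:

static void unpack_uyvy(unsigned int w,
                        unsigned char *u, unsigned char *y1,
                        unsigned char *v, unsigned char *y2)
{
   *u  = (w      ) & 0xFF;   // shared U for both pixels (lowest byte in the register)
   *y1 = (w >>  8) & 0xFF;   // luma of the first pixel
   *v  = (w >> 16) & 0xFF;   // shared V for both pixels
   *y2 = (w >> 24) & 0xFF;   // luma of the second pixel (highest byte in the register)
}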


After reverse engineering the DryOS Disp_Check() routine, I found the formula used is based on the ITU-R BT.601 standard. Here's a great page with the formulae to convert YUV -> RGB:

http://softpixel.com/~cwright/programming/colorspace/yuv/

The Canon formulae are within 0.1% of these (Canon bizarrely has some rounding errors in its code and decided to calculate everything in base 10 rather than binary .. nice and slow calcs too).


If you cast your eyes back at the table above:
{0x5C00FF50, 0x5C09CE9C, 596, 1440, 1440}, // seg 14 vram[1]

So when the LCD display is active, there are 1440 bytes per Vram line = 720 pixels.
The LCD display is only 720 pixels wide .. so that's perfect, I hear you say.
Maybe .. but what if you want to focus the image that is being recorded with 1920 horizontal pixels?
Look at segment 1.
{0x04000080, 0x0415407C,1360, 2048, 3744}, // seg 1
(2048 Bytes =1024 pixels when not recording. 3744 Bytes = 1872 Pixels when recording).
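
As a sketch of what that means in practice (my own helper, using the mem_segments[] table above; index 12 is the uncached HD_Vram Bank_1 entry), the luma of pixel (x, y) can be pulled out like this:

static unsigned char hd_vram_luma(int x, int y, int recording)
{
   struct mem_segment_struct * seg = &mem_segments[12];   // HD_Vram Bank_1 (uncached)
   unsigned int pitch = recording ? seg->pitch_Rec : seg->pitch;
   unsigned char * line = (unsigned char *) (seg->start + y * pitch);
   return line[ 2*x + 1 ];   // bytes run Uu Yy1 Vv Yy2 .. so luma sits at the odd offsets
}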

This discovery led me to experiment with the option of displaying the Vram using the rather limited Overlay palette (16 bits of YUV -> 8 bits of colours selected by Canon, some of which they don't even use!!!).

Here was my first attempt at displaying the vram (and also my first attempt at using a Logitech web-cam to record the 5D2 output on an LCD screen).

http://www.youtube.com/watch?v=kMMWeouFqsA


OVERLAY FORMAT

Each horizontal line in the Overlay has space for 960 pixels. // ML 'pitch' = 960
When in standard-def modes, only 720 pixels are active.
When hooked up to an external HD display and NOT RECORDING, all 960 pixels are used.


For the Overlay (ie the LCD display), there are 8 bits per pixel.
[B1 B2 B3 B4] .. [B5 B6 B7 B8] ..-> Memory address increases in Overlay

Like the Vram memory, the bytes of the pixels are reordered before being loaded into the ARM.
The first word would look like this: MSB [0x B4_B3_B2_B1] LSB
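
So (a minimal sketch, with my own names; the 960 comes from the pitch above) plotting a single palette-indexed pixel into the Overlay buffer is just:

#define OVERLAY_PITCH 960   // bytes per Overlay line = pixels per line at 8 bits per pixel

static void overlay_set_pixel(unsigned char * overlay, int x, int y, unsigned char colour)
{
   overlay[ y * OVERLAY_PITCH + x ] = colour;   // 'colour' is an index into the Overlay palette
}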


During startup, each of the Overlay palette's YUV and T (Transparency) components is loaded from DIGIC memory into a structure. When dispcheck() (the DryOS routine that dumps the Overlay into a *.bmp file on your CompactFlash card) runs, it has to convert the YUV values into their RGB equivalents, then store these in the bmp header for the PC to display (the *.bmp file stores 8 bits of colour for each pixel, and these bits index a colour in the header).
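
As a hypothetical sketch of that step (the struct layout and names here are mine, not DryOS's; yuv_to_rgb() is the conversion sketched after the formula below):

struct overlay_palette_entry { unsigned char Y, U, V, T; };

static void fill_bmp_colour_table(struct overlay_palette_entry pal[256],
                                  unsigned char bmp_table[256][4])
{
   int i;
   for (i = 0; i < 256; i++)
   {
      unsigned char r, g, b;
      yuv_to_rgb(pal[i].Y, pal[i].U, pal[i].V, &r, &g, &b);
      bmp_table[i][0] = b;   // *.bmp colour tables are stored Blue, Green, Red, reserved
      bmp_table[i][1] = g;
      bmp_table[i][2] = r;
      bmp_table[i][3] = 0;
   }
}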


Here's the formula that Canon uses (mentioned also in the Vram section).

/*********************************
 * R = Y + 1.403V'               *
 * G = Y - 0.344U' - 0.714V'     *
 * B = Y + 1.770U'               *
*********************************/
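
A minimal sketch of that conversion in C (the -128 offsets for U' and V' and the clamp to 0..255 are taken from the linked page, not lifted from the Canon code):

static unsigned char clamp8(int x)
{
   return (x < 0) ? 0 : (x > 255) ? 255 : (unsigned char) x;
}

static void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
   int up = (int) u - 128;   // U'
   int vp = (int) v - 128;   // V'
   *r = clamp8( (int) (y + 1.403 * vp) );
   *g = clamp8( (int) (y - 0.344 * up - 0.714 * vp) );
   *b = clamp8( (int) (y + 1.770 * up) );
}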


[AI] See JPEG File Interchange Format Version 1.02 : http://www.w3.org/Graphics/JPEG/jfif3.pdf

ZEBRA

After a bit of a play with the ML edge detection .. I was not convinced that I would use it for passive focusing.
After a bit more head scratching, here are my thoughts on what would be required (if possible):

+) A low-lag Zebra (the 'ED' in Zedbra was because it was doing Edge Detection) for Over & Under exposure.
On the 5d2 .. this ASM code works at between 24-40 fps with everything else switched off (ie fast enough).
+) A smaller checker board size (using the standard 8x8 pixels made it look blocky, so I use 4x4 - there is a sketch of the test after this list).
+) A switch to detail at the single pixel level (like the no-longer-included ASM ED .. there is more detail).
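
Here is a minimal sketch of that 4x4 checkerboard test (my own C version of the idea - the real thing lives in ASM): a zebra pixel is only painted where the test returns 1, which keeps half of the underlying image visible.

static int checker_4x4_on(int x, int y)
{
   return ( (x >> 2) ^ (y >> 2) ) & 1;   // pattern flips every 4 pixels in x and in y
}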


OVERLAY / FOCUS ASSIST

My intention is to break up the whole screen ZEDBRA code into:
-> CROPPED_BOX ( Zebra )
-> OPTIONAL ( 'Magnified' Vram image from Segment 1 )

Something a bit like this: http://www.youtube.com/watch?v=vx_Aefa6qVo

I don't want to get too far ahead of myself here .. as I am currently sorting out (debugging) the ASM for the Vram magnification. I have a GUI interface in mind... I'll return here in a few weeks (ASM takes ages to sort out). :D

Happy New Year

AJ
