wiredfool

Archive for February, 2007

Boy

L Series

3 buck Jasper

Tomato Firmware

Ooh, this looks interesting. Yet another firmware for the Linksys/Buffalo WRT-series routers (I have two), this one named for a vegetable. (Or fruit. Whatever.) The Tomato Firmware looks more shiny and has interesting bandwidth graphs and other such things built in.

Like I need a router to be shiny. It needs to work. Must not install the shiny.

What’s this light stand connected to?

New Book

Diagonal

On Colorspaces

and towards a more color managed workflow.

A while ago I posted about an article that had a lot of detail on colorspaces, profiles, and what works both on the web and in print. One of the takeaways from that was that my workflow was not color managed, except possibly on the Mac and probably on my laptop.

Which is a bummer, since the laptop has proved itself to have a really wonky color response profile, at least as compared to what gets printed and any other display I have access to. For example, custom white balance in the camera is better than doing anything by eye on this machine. Print matching was a nightmare. It turns out that there’s a monitor profile from 10.4.7 that is a lot closer to what the prints were producing, so I’m using that for now. But still.

The takeaways were that anything destined for the web or print probably should be in sRGB, that the camera should probably be in AdobeRGB (a wider color space), and everything at every stage should be tagged with the appropriate profile. Finally, I should get a colorimeter and at least get the best profile I can on this monitor.

I’m still using a raw converter that uses the Core Image filters to do my first cut at the images. It’s good enough for most of the web-destined images, unless I need to do a B&W conversion or edits. It turns out that it’s not that hard to hack in support for specific color spaces instead of the generic ones.

Since I don’t see this well represented in Google, here’s the Obj-C:

CGColorSpaceRef cs, default_cs;
CGDataProviderRef profile;
NSDictionary *options;
CIContext *context;
float ranges[] = {0.0, 255.0, 0.0, 255.0, 0.0, 255.0};

// Device RGB is the fallback if the ICC profile can't be loaded.
default_cs = CGColorSpaceCreateDeviceRGB();

// Load the system sRGB profile from disk.
profile = CGDataProviderCreateWithURL((CFURLRef)
    [NSURL fileURLWithPath:@"/System/Library/ColorSync/Profiles/sRGB Profile.icc"]);

// Three components (RGB), each with a 0-255 range.
cs = CGColorSpaceCreateICCBased(3, ranges, profile, default_cs);

options = [NSDictionary dictionaryWithObject:(id)cs
                                      forKey:kCIContextOutputColorSpace];

context = [CIContext
    contextWithCGContext:[[NSGraphicsContext currentContext] graphicsPort]
                 options:options];

Where previously I was passing in no options (or, more precisely, nil), this time I’m passing in a dictionary specifying an output color space, initialized from a system profile. This will convert anything that is painted into the context into sRGB and include the profile in the output image. It’s apparently also possible to use a Lab color profile, but I’m unsure how useful that would be, since the only other Lab-aware app I know of is Photoshop.
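To make the conversion concrete, here’s a rough sketch in Python of what a colorspace conversion like this amounts to: linearize with the source gamma, matrix through CIE XYZ, and re-encode with the destination transfer curve. It uses the published AdobeRGB and sRGB D65 matrices and a plain 2.2 gamma for AdobeRGB; this is a simplification of what ColorSync actually does with the full ICC profiles (no rendering intents, per-pixel and slow).

```python
# Sketch of an AdobeRGB -> sRGB conversion using the published primaries.
# A simplification: real color management works from the full ICC profiles.

def adobergb_to_srgb(rgb):
    """Convert one AdobeRGB pixel (components in 0..1) to sRGB."""
    # 1. Linearize: AdobeRGB uses a pure ~2.2 gamma curve.
    lin = [c ** 2.2 for c in rgb]

    # 2. Linear AdobeRGB -> CIE XYZ (D65 white point).
    m_a2x = [(0.5767, 0.1856, 0.1882),
             (0.2973, 0.6274, 0.0753),
             (0.0270, 0.0707, 0.9911)]
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in m_a2x]

    # 3. XYZ -> linear sRGB, clipping anything outside the sRGB gamut.
    m_x2s = [(3.2406, -1.5372, -0.4986),
             (-0.9689, 1.8758, 0.0415),
             (0.0557, -0.2040, 1.0570)]
    lin_s = [min(1.0, max(0.0, sum(m * c for m, c in zip(row, xyz))))
             for row in m_x2s]

    # 4. Re-encode with the sRGB curve (linear toe, then a 2.4 power).
    return [12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
            for c in lin_s]
```

White and neutral grays survive the trip essentially unchanged; a fully saturated AdobeRGB green falls outside the sRGB gamut, so its red channel clips to zero.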

Looking at the camera data, the RAW files are tagged AdobeRGB, 16-bit, which is a bit strange, since AdobeRGB is nominally an 8-bit space. But since it’s RAW, it really doesn’t matter; the RAW files are open to so much interpretation anyway. If I do switch to JPEG, it should still be in AdobeRGB, so I’ll capture as much of the gamut as possible.

I’m curious how much of a difference this will make. It certainly feels like the right thing to do.

Flying

Lest you think that his eyes don’t open…
