Posted By Wolfgang on February 29, 2012
My apologies for the long delay in this posting; I have spent an unfortunate amount of my free time lately becoming an expert on debilitating pains of the stomach.
What I was talking about last time was types of renderers in general. Now I’d like to talk about renderers more specifically, by sharing with you the results of some tests I have performed.
Bear in mind that none of my tests were exhaustive, and all of my notes here are primarily first impressions. I urge you to perform your own tests before making any final decisions or spending any money, but perhaps my reviews will at least give you ideas for the types of tests and research to do.
Comparison: Mental Ray
To start with, I rendered an image in Mental Ray to use as a test. I’ll be using this scene as a general comparison for the other renderers.
Render time: 7:12
As you see, it’s a bit grainy and noisy, but it’s a fairly fast test for the scene that’s set up here. Note that this test scene has no lights; all illumination is provided via Final Gather from the floating incandescent cube, and a linear workflow ensures that the illumination is as close to physical accuracy as possible.
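The linear workflow mentioned here boils down to two conversion functions. As a rough illustration (these are the standard sRGB transfer curves, not any particular renderer's code):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value (0-1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value (0-1) back for display."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Light transport must be computed on linear values; summing two equal
# light contributions in sRGB space would give a different (wrong) result.
a = srgb_to_linear(0.5)
combined = linear_to_srgb(a + a)  # noticeably less than 1.0
```

The point is simply that physically meaningful operations (adding light contributions, multiplying by albedo) only behave correctly on the linear values; the encode step happens once, at display time.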
Test #1: Fryrender
I decided to start testing Fryrender because I was experimenting with methods of re-lighting during compositing (broadly discussed in Question 2 of this previous posting), and found a video demonstration of Fryrender Swap (which I linked to in my last posting), which was, frankly, amazing. Unfortunately such amazing functionality comes at an enormous price.
Render time: 7:32
The lighting is very different from Mental Ray. Likely more ‘physically accurate’, but in this instance it doesn’t look as good. And, as with most unbiased renderers, there are almost no settings with which to adjust the render calculations. The full extent of your control is what you can do in post.
Unfortunately, vignetting is on by default. It can be turned off if you remember to; as you can see from the render above, I didn't.
Even without Swap, you can adjust the brightness and colour of emissive surfaces in post without having to re-render. Unfortunately, the utility of this is drastically reduced by the fact that the sliders only affect an extremely small preview window. To update the main render view, you must click a button. Updating is virtually instant, but there does not appear to be a keyboard shortcut for it, so prolonged use, with your mouse shuttling back and forth between the button and the sliders, will likely give you carpal tunnel fairly quickly.
With Fryrender Swap, the feature list is impressive; however, there is no demo of Swap, so all I could see was the base renderer. And in the base renderer, the ability to adjust emissive surfaces is all that is offered in the way of unique features.
This is where Fryrender falls flat on its face and begins flailing around, screaming about a broken nose.
To start things off, RandomControl (the creator of Fryrender) appears to have lost interest in development a few years ago, when they started work on Arion, their GPU-accelerated renderer. As a result, the plug-in for the demo version (Fryrender 1.5) only supports versions of Maya up to 2010. The latest version (Fryrender 1.6) has a plug-in for more recent versions of Maya up to 2012—but the plug-in has been ‘in beta’ for over six months. Unlike Google, RandomControl notes on their download page that when they say beta, they mean it:
Warning: These plugins are meant to be used with the corresponding Beta version of our products. Note that Beta software should not be considered production-ready. We encourage you to use the stable release version of our plugins, unless you need a feature that is only available in the Beta version, or for testing purposes.
Luckily I have a copy of Maya 2010, so I was able to evaluate Fryrender 1.5. It was buggy and frustrating to use. I wrote RandomControl and asked if I could please evaluate Fryrender 1.6 with Maya 2012, since Maya 2010 was difficult to work with. They did not have the courtesy to respond.
So finally, with apparently the best and most current version available to me (and possibly even to paying customers), I slogged through evaluating Fryrender with Maya 2010. And oh boy, what an experience.
- Fryrender does not support lights of any kind.
- Fryrender can recognize parts of the default Maya shaders, but only small parts. Texture positioning and tiling data, for instance, are ignored.
- Fryrender has a great material editor in Maya, but no viewport preview. Objects with Fryrender shaders applied to them appear green in the viewport, regardless of settings.
- If you save the scene as a Maya ASCII (.ma) file, all Fryrender data is wiped from the scene (including shaders). Fryrender only saves data in Maya Binary (.mb) format.
- Before every render, you must export your scene to the standalone Fryrender application. This can take significantly longer than your average test render on complex scenes. (This is especially frustrating because of the lack of any kind of preview of shaders in the viewport.)
- I can’t recall if I tested fur, hair, or fluids, but it doesn’t appear to support them.
Architects and hobbyists who are willing to put up with the frustrations may love this renderer to pieces. For anyone who might have to rely on this renderer in a production environment, the poor integration with Maya, aggressively disinterested support, and sub-optimal workflow are simply insurmountable.
If development was still ongoing, I’d call this a renderer with a very bright future; it feels like a fantastically ambitious effort from people with vision who simply had to compromise far, far too much along the way. If the kinks could be worked out, this renderer could be amazing; and perhaps it is for other 3D packages. For Maya, however, it needs a lot of work that it doesn’t appear that it will ever get.
Test #2: Maxwell Render
Fryrender didn’t leave me with a good opinion of unbiased renderers, but I have heard great things about Maxwell Render from many different sources, so I chose not to be deterred. I was rewarded with a very pleasant surprise; Maxwell is precisely what it says on the package.
Render time: 7:29
The lighting is, again, very different from Mental Ray, but almost identical to Fryrender (only without the auto-vignetting). As before, it doesn’t look as good as the Mental Ray image and there is little you can do to change the appearance other than editing it in post—but it’s likely more physically accurate than the Mental Ray render, and reality rarely cares what does or doesn’t look ‘good’ to us.
Unfortunately, Maxwell Render doesn't have anything quite as (theoretically) cool as Fryrender Swap, but in keeping with the trend of unbiased renderers adding GPU acceleration, Maxwell Render has an addition called Maxwell Fire, which adds real-time preview renders to your viewport. I have not personally tested this feature, but I have heard generically good things.
With some easy and fast set-up in Maya, lights (yes, it supports those) and some material colours can be adjusted in post, but textures cannot be modified.
Installation is easy and smooth. The renderer itself is a standalone application—but since the export process from Maya is fast, automated, and seamless, it feels like exporting before rendering is a feature that allows you to render while you keep working in Maya, rather than a terrible hack like it did in Fryrender.
Maxwell interprets Maya materials acceptably, though using advanced features requires the use of Maxwell’s own shaders. These shaders are nicely integrated into Maya and preview in the viewport.
Maxwell supports hair and fur, though seemingly not fluids (I did not test them).
Before meeting Fryrender I didn't think this needed saying, but Maxwell Render supports spotlights quite well—though with a reduced set of settings.
Maxwell is, overall, a very good unbiased renderer. If physical accuracy is important to you, you could definitely do a lot worse.
If all you’re looking for is a good-looking end product, you may (depending on your circumstances) be better advised to go with V-ray or just stick with Mental Ray; but Maxwell is certainly worth considering.
Test #3: V-ray
I have heard many great things about V-ray from many sources; often in the context of “wow, I switched from Mental Ray to V-ray and I can’t believe I didn’t do it sooner! This is great!” So, as you may expect, I had high expectations. I was neither disappointed nor surprised.
Render Time: 10:22
The V-ray evaluation version is capped at 600×450, and V-ray is a hybrid biased/unbiased renderer, so I wasn't able to match the other render times as closely here. As you can see, I ran over by a few minutes on this image, but it's still a fair comparison.
This image was created mostly using the Nederhorst settings, which Andrew Weidenhammer talks about fairly extensively on his lovely blog. Andrew has (recently, at least), been primarily dedicating his blog to free tutorial videos he has created showing how to use V-ray in Maya.
Render time would have been a bit lower if I could have used a light cache for my secondary bounces, but the high contrast in the peg forest seemed to disagree with my cache settings, and some artefacts appeared. Rather than spend time troubleshooting, I used Brute Force instead. More render time, less of my time.
Integrated support for Spherical Harmonics. (I did not test this feature.)
Almost perfect, though with a few minor annoyances for me.
Unlike the unbiased renderers, V-ray is not a separate standalone renderer—it is fully integrated into Maya, and even uses the Maya render viewport. Unfortunately, the way it uses the viewport leaves a little to be desired—specifically, it only delivers 8-bit images to Maya. If, like me, you're fond of a linear workflow, this will frustrate you: most of the preview renders you see will show banding in the dark areas that isn't really in the image.
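To make the banding concrete, here is a toy illustration (plain Python, not V-ray's code) of how few 8-bit codes are left for the shadows when the buffer stores linear values:

```python
def quantize_8bit(x):
    """Snap a 0-1 value to the nearest of 256 levels, as an 8-bit buffer does."""
    return round(x * 255) / 255

# Distinct 8-bit levels available for linear values in the deep shadows
# (0.0 to 0.05); a float buffer would keep all 500 sample values distinct.
dark_levels = {quantize_8bit(i / 10000) for i in range(500)}
print(len(dark_levels))  # 14 -- the whole shadow range shares 14 codes
```

When the display gamma brightens those shadows for viewing, the 14 steps spread apart and become visible bands; a float (or even 16-bit) preview buffer avoids the problem.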
V-ray has its own render window, which is fairly good and offers some awesome features; but it has some major limitations as well, which I won't go into here.
The version of V-ray I previewed also had no support whatsoever for Fur, which is a severe inconvenience. Since that test, however, V-ray 2.0 has been released, which claims to fully support Maya Fur. I have not tested this myself.
Maya shaders are translated mostly well, though a few settings don’t do what you’d expect. V-ray shaders are very good and fully-featured, so that is hardly a downside.
Overall V-ray is clearly a very solid renderer, and is mostly similar to Mental Ray. To my mind that ends up being a slight downside, however; it is so similar to Mental Ray that it just doesn’t seem worth the effort/expense of switching over.
The shader and linear workflow interface in V-ray seems generally slicker, but V-ray’s light caches don’t seem quite as good as Mental Ray’s Final Gather feature. V-ray might be slightly easier to set up on a per-render basis, but Mental Ray provides a little more control. V-ray seems a little easier to use, but Mental Ray has much better documentation.
At the end of the day, for me, Mental Ray is what I’ve been using, it’s here now, and I don’t have to worry about it not supporting some feature in this or any other version of Maya. (Except Ptex, but I can live with that for now.)
Test #4: Renderman for Maya
It is generally agreed that Pixar consistently turns out some of the best animation and CG work in the industry. And the renderer they use to do it is Pixar’s Renderman. With an endorsement like that, I’d always been a bit confused as to why it seemed that not many studios used Renderman as their primary renderer.
After testing it for myself, it seems a lot less mysterious. It’s really more of a framework than a renderer, per se—and a very specialized one at that. Bring your own shader artists, and hope you can figure out a way to do without raytracing.
Render Time: 5:13
This render is the smoothest of the test renders, though not without its own artefacts—most notably the darkness in the corners, which is essentially the shadowed area behind the walls bleeding through. Interestingly, this scene was rendered completely without raytracing; Renderman achieves extremely good approximation of Final Gather (including colour bleeding, not shown here) through the use of point clouds and brickmapping.
It’s interesting and impressive technology, if you can get away with using it. Unfortunately, it requires a lot of time to set up properly.
Renderman supports some very advanced rendering solutions, such as Point Clouds, brickmaps, and Ptex.
With the touch of a tickbox, Renderman will adaptively subdivide models at render time based on sampling rate to always provide perfectly smooth renders even at 4k resolution—and it does it fast.
Renderman has outstanding support for motion blur.
Renderman's quality settings are extremely intuitive and easy to use, meaning not one CPU cycle need be wasted.
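The render-time adaptive subdivision mentioned above is the REYES-style "dicing" idea: keep splitting geometry until each piece's projected size drops below the shading rate, so smoothness scales with output resolution automatically. A toy sketch of the logic (illustrative only, not Renderman's actual implementation):

```python
def dice_depth(patch_size_px, shading_rate_px=1.0, depth=0):
    """Subdivision depth needed so each facet covers <= shading_rate_px pixels."""
    if patch_size_px <= shading_rate_px:
        return depth
    return dice_depth(patch_size_px / 2, shading_rate_px, depth + 1)

# A patch spanning 64 px at video resolution needs 6 halvings; the same
# patch spanning 512 px in a 4k render needs only 9 -- cost grows gently.
print(dice_depth(64), dice_depth(512))  # 6 9
```

Because the split criterion is screen-space size, distant or small objects get almost no subdivision at all, which is a big part of why the renders come out fast.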
Seamless. Perfect. Every aspect of Maya shaders transfers over, and if you want to take advantage of one of the awesome features in Renderman, you simply use the contextually-created custom menu in the Attribute Editor to add custom attributes to any shaders and/or objects as needed. Since custom attributes are already deeply integrated into Maya, this approach works perfectly.
The main downside to Renderman is that it doesn’t translate Mental Ray shaders; and since Renderman doesn’t come with any shaders of its own(!), that means that a lot of more advanced rendering functions aren’t readily available. If you want good fresnel falloff on your reflections, you need to use a camera sampler node and hook up the shader network yourself (or write your own shader).
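For reference, the fresnel falloff in question is usually approximated with Schlick's formula, which is roughly what such a hand-built shader network ends up computing. A sketch (`f0` is the reflectance when facing the surface head-on; the default here is a typical dielectric value):

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation: reflectance rises toward 1 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))  # 0.04 -- weak reflection facing head-on
print(schlick_fresnel(0.0))  # ~1.0 -- grazing angles reflect like a mirror
```

In a camera sampler setup, `cos_theta` comes from the dot product of the surface normal and the view direction, which is exactly the value the sampler node provides.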
Maya Hair, Fur, and Fluids are supported, though Maya Fur has some odd quirks. Nothing major, but it’ll just look different (mostly better, but not always) in Renderman than in any other renderer.
Unfortunately, there is virtually no support for using raytrace lighting. Where there is support, it’s obscenely slow and noisy. You could probably get around this by writing custom shaders? I didn’t have the weeks it would have taken to test.
Some great rendering features, downright amazing support for motion blur, and able to output high-resolution images in record-breaking time. Unfortunately, however, there is virtually no automation. If you want Sub-Surface Scattering, you need to write your own shader. If you want raytracing, live without it. If you want shadows that get blurrier the further they are from the light, there may be a trick for that, maybe.
If you ever find yourself in a position where your render wall is sucking up the power from three nuclear power plants, you’re using jet engines for cooling, and the heat from the facility is visible from space, just hire a few thousand artists and give them this renderer and all your problems go away.
If you’re a smaller shop or sole-operator and have more render capacity than you have people to use it, pass on this one.