Maya Renderer Overview

Posted on February 29, 2012

My apologies for the long delay in this posting; I have spent an unfortunate amount of my free time lately becoming an expert on debilitating pains of the stomach.

What I was talking about last time was types of renderers in general. Now I’d like to talk about renderers more specifically, by sharing with you the results of some tests I have performed.

Bear in mind that none of my tests were exhaustive, and all of my notes here are primarily first impressions. I urge you to perform your own tests before making any final decisions or spending any money, but perhaps my reviews will at least give you ideas for the types of tests and research to do.

Comparison: Mental Ray

To start with, I rendered an image in Mental Ray to use as a test. I’ll be using this scene as a general comparison for the other renderers.

Mental Ray comparison image

Render time: 7:12

As you see, it’s a bit grainy and noisy, but it’s a fairly fast test for the scene that’s set up here. Note that this test scene has no lights; all illumination is provided via Final Gather from the floating incandescent cube, and a linear workflow ensures that the illumination is as close to physical accuracy as possible.

Test #1: Fryrender

I decided to start testing Fryrender because I was experimenting with methods of re-lighting during compositing (broadly discussed in Question 2 of this previous posting), and found a video demonstration of Fryrender Swap (which I linked to in my last posting), which was, frankly, amazing. Unfortunately such amazing functionality comes at an enormous price.

Render Comparison

Fryrender comparison image

Render time: 7:32

The lighting is very different from Mental Ray. Likely more ‘physically accurate’, but in this instance it doesn’t look as good. And, as with most unbiased renderers, there are almost no settings with which to adjust the render calculations. The full extent of your control is what you can do in post.

Unfortunately, vignetting is on by default. It can be turned off if you remember to—however, as you can see from the above render, I didn’t.

Unique Features

Even without Swap, you can adjust the brightness and colour of emissive surfaces in post without having to re-render. Unfortunately, the utility of this is drastically reduced by the fact that the sliders only affect an extremely small preview window. To update the main render view, you must click a button. Updating is virtually instant, but there does not appear to be a keyboard shortcut for it, so prolonged use will likely give you carpal tunnel syndrome as you move your mouse back and forth between the sliders and the button.

With Fryrender Swap, the feature list is impressive; however there is no demo of Swap, so all I could see was the base renderer. And in the base renderer, the ability to adjust emissive surfaces is all that is offered in the way of unique features.

Maya Integration

This is where Fryrender falls flat on its face and begins flailing around, screaming about a broken nose.

To start things off, RandomControl (the creator of Fryrender) appears to have lost interest in development a few years ago, when they started work on Arion, their GPU-accelerated renderer. As a result, the plug-in for the demo version (Fryrender 1.5) only supports versions of Maya up to 2010. The latest version (Fryrender 1.6) has a plug-in for more recent versions of Maya up to 2012—but the plug-in has been ‘in beta’ for over six months. Unlike Google, RandomControl notes on their download page that when they say beta, they mean it:

Warning: These plugins are meant to be used with the corresponding Beta version of our products. Note that Beta software should not be considered production-ready. We encourage you to use the stable release version of our plugins, unless you need a feature that is only available in the Beta version, or for testing purposes.

Luckily I have a copy of Maya 2010, so I was able to evaluate Fryrender 1.5. It was buggy and frustrating to use. I wrote RandomControl and asked if I could please evaluate Fryrender 1.6 with Maya 2012, since Maya 2010 was difficult to work with. They did not have the courtesy to respond.

So finally, with apparently the best and most current version available to me (and possibly even to paying customers), I slogged through evaluating Fryrender with Maya 2010. And oh boy, what an experience.


  • Fryrender does not support lights of any kind.
  • Fryrender can recognize parts of the default Maya shaders, but only small parts. Texture positioning and tiling data, for instance, are ignored.
  • Fryrender has a great material editor in Maya, but no viewport preview. Objects with Fryrender shaders applied to them appear green in the viewport, regardless of settings.
  • If you save the scene as a Maya ASCII (.ma) file, all Fryrender data is wiped from the scene (including shaders). Fryrender only saves data in Maya Binary (.mb) format.
  • Before every render, you must export your scene to the standalone Fryrender application. This can take significantly longer than your average test render on complex scenes. (This is especially frustrating because of the lack of any kind of preview of shaders in the viewport.)
  • I can’t recall if I tested fur, hair, or fluids, but it doesn’t appear to support them.


Architects and hobbyists who are willing to put up with the frustrations may love this renderer to pieces. For anyone who might have to rely on this renderer in a production environment, the poor integration with Maya, aggressively disinterested support, and sub-optimal workflow are simply insurmountable.

If development was still ongoing, I’d call this a renderer with a very bright future; it feels like a fantastically ambitious effort from people with vision who simply had to compromise far, far too much along the way. If the kinks could be worked out, this renderer could be amazing; and perhaps it is for other 3D packages. For Maya, however, it needs a lot of work that it doesn’t appear that it will ever get.

Test #2: Maxwell Render

Fryrender didn’t leave me with a good opinion of unbiased renderers, but I have heard great things about Maxwell Render from many different sources, so I chose not to be deterred. I was rewarded with a very pleasant surprise; Maxwell is precisely what it says on the package.

Render Comparison

Maxwell Render comparison image

Render time: 7:29

The lighting is, again, very different from Mental Ray, but almost identical to Fryrender (only without the auto-vignetting). As before, it doesn’t look as good as the Mental Ray image and there is little you can do to change the appearance other than editing it in post—but it’s likely more physically accurate than the Mental Ray render, and reality rarely cares what does or doesn’t look ‘good’ to us.

Unique Features

Unfortunately, Maxwell Render doesn’t have anything quite as (theoretically) cool as Fryrender Swap, but in keeping with the drive among unbiased renderers to add GPU acceleration, Maxwell Render has an addition called Maxwell Fire, which adds real-time preview renders to your viewport. I have not personally tested this feature, but I have heard generically good things.

With some easy and fast set-up in Maya, lights (yes, it supports those) and some material colours can be adjusted in post, but textures cannot be modified.

Maya Integration

Extremely good.

Installation is easy and smooth. The renderer itself is a standalone application—but since the export process from Maya is fast, automated, and seamless, exporting before rendering feels like a feature that lets you keep working in Maya while the render runs, rather than the terrible hack it was in Fryrender.

Maxwell interprets Maya materials acceptably, though using advanced features requires the use of Maxwell’s own shaders. These shaders are nicely integrated into Maya and preview in the viewport.

Supports hair/fur, though seemingly not fluids (did not test).

Before meeting Fryrender, I didn’t think this needed saying, but: Maxwell Render supports spotlights quite well—though with a reduced set of settings.


Maxwell is, overall, a very good unbiased renderer. If physical accuracy is important to you, you could definitely do a lot worse.

If all you’re looking for is a good-looking end product, you may (depending on your circumstances) be better advised to go with V-ray or just stick with Mental Ray; but Maxwell is certainly worth considering.

Test #3: V-ray

I have heard many great things about V-ray from many sources; often in the context of “wow, I switched from Mental Ray to V-ray and I can’t believe I didn’t do it sooner! This is great!” So, as you may expect, I had high expectations. I was neither disappointed nor surprised.

Render Comparison

V-ray comparison image

Render Time: 10:22

The V-ray evaluation version is capped at 600×450, and V-ray is a biased/unbiased renderer, so I wasn’t able to match the other render times as closely here. As you can see, I ran over by a few minutes on this image, but it’s a fair comparison.

This image was created mostly using the Nederhorst settings, which Andrew Weidenhammer talks about fairly extensively on his lovely blog. Andrew has (recently, at least) been dedicating his blog primarily to free tutorial videos he has created showing how to use V-ray in Maya.

Render time would have been a bit lower if I could have used a light cache for my secondary bounces, but the high contrast in the peg forest seemed to disagree with my cache settings, and some artefacts appeared. Rather than spend time troubleshooting, I used Brute Force instead. More render time, less of my time.

Unique Features

Integrated support for Spherical Harmonics. (I did not test this feature.)

Maya Integration

Almost perfect, though with a few minor annoyances for me.

Unlike the unbiased renderers, V-ray is not a separate standalone renderer—it is fully integrated into Maya, and even uses the Maya render viewport. Unfortunately, the way it uses the viewport leaves a little to be desired—specifically, it only delivers 8-bit images to Maya. If, like me, you’re fond of a linear workflow, this will frustrate you, since it means that most of the preview renders you see will have banding in the dark areas that isn’t really there.
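
To see why, consider how the 256 available 8-bit code values get spent. A quick back-of-the-envelope check in Python (using the simple 2.2 power-curve approximation of the sRGB transfer function, and counting how many code values land in the deep shadows under each encoding):

    import numpy as np

    codes = np.arange(256) / 255.0  # every possible 8-bit code value

    # Count the code values that represent linear light below 0.05
    # (the deep shadows) under each encoding.
    darks_gamma  = np.sum(codes ** 2.2 < 0.05)  # gamma-encoded: 66 values
    darks_linear = np.sum(codes < 0.05)         # linear: only 13 values

    print(darks_gamma, darks_linear)

A dozen-odd levels to cover the entire bottom of the range is why the shadows band in an 8-bit linear preview but not in the final floating-point output.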

V-ray has its own render window, which is fairly good and offers some awesome features; but it also has some major limitations, which I won’t go into here.

The version of V-ray I previewed also had no support whatsoever for Fur, which is a severe inconvenience. Since that test, however, V-ray 2.0 has been released, which claims to fully support Maya Fur. I have not tested this myself.

Maya shaders are translated mostly well, though a few settings don’t do what you’d expect. V-ray shaders are very good and fully-featured, so that is hardly a downside.


Overall V-ray is clearly a very solid renderer, and is mostly similar to Mental Ray. To my mind that ends up being a slight downside, however; it is so similar to Mental Ray that it just doesn’t seem worth the effort/expense of switching over.

The shader and linear workflow interface in V-ray seems generally slicker, but V-ray’s light caches don’t seem quite as good as Mental Ray’s Final Gather feature. V-ray might be slightly easier to set up on a per-render basis, but Mental Ray provides a little more control. V-ray seems a little easier to use, but Mental Ray has much better documentation.

At the end of the day, for me, Mental Ray is what I’ve been using, it’s here now, and I don’t have to worry about it not supporting some feature in this or any other version of Maya. (Except Ptex, but I can live with that for now.)

Test #4: Renderman for Maya

It is generally agreed that Pixar consistently turns out some of the best animation and CG work in the industry. And the renderer they use to do it is Pixar’s Renderman. With an endorsement like that, I’d always been a bit confused as to why it seemed that not many studios used Renderman as their primary renderer.

After testing it for myself, it seems a lot less mysterious. It’s really more of a framework than a renderer, per se—and a very specialized one at that. Bring your own shader artists, and hope you can figure out a way to do without raytracing.

Render Comparison

Renderman comparison image

Render Time: 5:13

This render is the smoothest of the test renders, though not without its own artefacts—most notably the darkness in the corners, which is essentially the shadowed area behind the walls bleeding through. Interestingly, this scene was rendered completely without raytracing; Renderman achieves extremely good approximation of Final Gather (including colour bleeding, not shown here) through the use of point clouds and brickmapping.

It’s interesting and impressive technology, if you can get away with using it. Unfortunately, it requires a lot of time to set up properly.

Unique Features

Renderman supports some very advanced rendering solutions, such as point clouds, brickmaps, and Ptex.

With the touch of a tickbox, Renderman will adaptively subdivide models at render time based on sampling rate to always provide perfectly smooth renders even at 4k resolution—and it does it fast.

Renderman has outstanding support for motion blur.

Renderman’s quality settings are extremely intuitive and easy to use, meaning not one CPU cycle need be wasted.

Maya Integration

Seamless. Perfect. Every aspect of Maya shaders transfers over, and if you want to take advantage of one of the awesome features in Renderman, you simply use the contextually-created custom menu in the Attribute Editor to add custom attributes to any shaders and/or objects as needed. Since custom attributes are already deeply integrated into Maya, this approach works perfectly.

The main downside to Renderman is that it doesn’t translate Mental Ray shaders; and since Renderman doesn’t come with any shaders of its own(!), a lot of the more advanced rendering functions aren’t readily available. If you want good fresnel falloff on your reflections, you need to use a camera sampler node and hook up the shader network yourself (or write your own shader).
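
For the curious, ‘hook up the shader network yourself’ means something like the classic facing-ratio trick. Here is a rough maya.cmds sketch of that network; note that I am assuming, without having verified it, that Renderman for Maya translates these particular utility nodes:

    import maya.cmds as cmds

    # A sampler for the facing ratio, a ramp to shape the falloff
    # curve, and a shader to receive the result.
    sampler = cmds.shadingNode('samplerInfo', asUtility=True)
    ramp    = cmds.shadingNode('ramp', asTexture=True)
    shader  = cmds.shadingNode('blinn', asShader=True)

    # facingRatio is 1.0 where the surface faces the camera and falls
    # towards 0.0 at grazing angles; driving a ramp with it lets you
    # sculpt a fresnel-style reflectivity falloff by editing the ramp.
    cmds.connectAttr(sampler + '.facingRatio', ramp + '.vCoord')
    cmds.connectAttr(ramp + '.outColorR', shader + '.reflectivity')

Set the ramp low at the facing end and high at the grazing end, and you have a passable fresnel curve without writing a single line of shader code.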

Maya Hair, Fur, and Fluids are supported, though Maya Fur has some odd quirks. Nothing major, but it’ll just look different (mostly better, but not always) in Renderman than in any other renderer.

Unfortunately, there is virtually no support for using raytrace lighting. Where there is support, it’s obscenely slow and noisy. You could probably get around this by writing custom shaders? I didn’t have the weeks it would have taken to test.


Some great rendering features, downright amazing support for motion blur, and able to output high-resolution images in record-breaking time. Unfortunately, however, there is virtually no automation. If you want Sub-Surface Scattering, you need to write your own shader. If you want raytracing, live without it. If you want shadows that get blurrier the further they are from the light, there may be a trick for that, maybe.

If you ever find yourself in a position where your render wall is sucking up the power from three nuclear power plants, you’re using jet engines for cooling, and the heat from the facility is visible from space, just hire a few thousand artists and give them this renderer and all your problems go away.

If you’re a smaller shop or sole operator and have more render capacity than you have people to use it, pass on this one.

Biased Vs. Unbiased

Posted on January 4, 2012

When assessing a renderer, one of the most important things to find out about it is whether it is biased or unbiased. A few renderers unhelpfully claim to be biased/unbiased, but this is just another way of saying biased with delusions of grandeur. If a renderer has the ability to create biased renders, it can be considered a biased renderer, even if it can masquerade as an unbiased renderer with the proper settings.

Biased renderers

If you’ve worked with 3D applications, you’re probably already familiar with biased renderers (whether you know it or not). Mental Ray is a biased renderer, as is the built-in renderer for most 3D packages (Maya, 3DS Max, Blender, etc.). Biased renderers tend to be fast, pretty, and filled with render settings. ‘Scanline’ has meaning to these renderers, as does ‘shadow map’.


Biased renderers are comparatively fast, and give the user a large degree of control over how rendering proceeds.


The biggest downside to biased renderers that I’ve found is that most of them still don’t natively work well with gamma, which can make it very difficult to get realistic results. A good linear workflow, however, makes this problem simply disappear.
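
For anyone who hasn’t met the term: the whole of a linear workflow fits in a few lines. De-gamma your inputs, do all lighting and compositing maths in linear space, and re-apply gamma only for display. A minimal sketch, using the common 2.2 power curve as a stand-in for the exact sRGB formula:

    import numpy as np

    GAMMA = 2.2  # display gamma; a stand-in for the exact sRGB curve

    def to_linear(srgb):
        # De-gamma textures and colour swatches into linear light.
        return np.power(srgb, GAMMA)

    def to_display(linear):
        # Re-apply gamma to a finished linear render for viewing.
        return np.power(np.clip(linear, 0.0, 1.0), 1.0 / GAMMA)

    # Light only adds up correctly in linear space, so every render
    # and compositing operation belongs between these two calls.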

Unbiased renderers

An unbiased renderer is more like a physics simulator than a renderer. The basic concept is that the computer uses a completely accurate and realistic light simulation to trace the path of a single ray for each pixel (one ‘pass’), and then it traces another ray, and then another, and so on. As the renderer runs, the image you see in the framebuffer slowly gets sharper, more accurate, and less noisy. This process could continue indefinitely. In fact, if you let the simulation run long enough (somewhere between months and years), you would have a perfectly accurate lighting model for that one frame—identical in every way to real life.
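
The core loop is simple enough to sketch in a few lines of Python. In this deliberately toy version, trace_ray stands in for the full physical light simulation and just returns a noisy estimate of the true radiance; the point is that the framebuffer is a running average, so the noise falls away as one over the square root of the pass count, which is exactly why the last of the grain takes so agonisingly long to clear:

    import numpy as np

    def trace_ray(x, y, rng):
        # Toy stand-in for a physically accurate light simulation:
        # the 'true' radiance for this pixel, plus per-sample noise.
        true_radiance = 0.5
        return true_radiance + rng.normal(0.0, 0.25)

    def render_progressive(width, height, passes):
        rng = np.random.default_rng(0)
        accum = np.zeros((height, width))
        for n in range(1, passes + 1):    # one ray per pixel per pass
            for y in range(height):
                for x in range(width):
                    accum[y, x] += trace_ray(x, y, rng)
            framebuffer = accum / n       # running mean shown to the user
        return framebuffer                # noise shrinks as 1/sqrt(passes)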

Another way of thinking of unbiased rendering is full raytracing. Unbiased renderers really aren’t big on render settings. If you set the number of ray bounces, that’s a bias, and antithetical to unbiased rendering. If you turn off reflective or refractive caustics, that’s a bias. If a material does not accurately conserve energy, that’s a bias. In the opinion of the creators of Fryrender, even something as simple as a spotlight is a bias, and not allowed—though Maxwell Render relents on this point, at least.

Unbiased rendering provides, without a shadow of a doubt, the most realistic render results you will ever see… eventually.


Unbiased renderers seem primarily focused on one thing: highly accurate and realistic architectural renders. A few of the unbiased renderers I looked at even came with the ability to set your location, date, and time of day—presumably so that architects can see precisely where sunlight will fall throughout the year. Impressive, yes; but of somewhat specialized utility.

Another benefit to unbiased renderers is that because they’re (comparatively) simple, they’ve been leading the way in both GPU acceleration and post-render modification. This video showing RandomControl’s Fryrender Swap, for instance, is frankly nothing short of amazing. After rendering, you can not only change materials, you can change textures and have that accurately show up in reflections, refractions, and even reflected light. You can add normal maps to a texture (which I would have thought too much of a bias for Fryrender, but apparently not) after rendering the scene. And you can do all of that in realtime.

I can see where that would be invaluable for interior decorators. Sure, you can’t change the design of a chair in realtime, say—but you can easily change its colour or pattern.


The downside? Slow. Very slow.

Closely related to the slowness, the unbiased nature causes some issues of its own. For instance, unbiased renderers generally seem to do much better with natural light than artificial lights. I was shocked to discover that Fryrender does not support spotlights. If you want light in a scene, you need to either use the environment (sky) light settings or have some geometry with an incandescent material on it.

As a result, if you want to, say, render a room with spotlights in it, the only way to get a realistic light pattern from the spotlight is to model the entire spotlight, including the reflector, as in this experiment here. Note that the render time for that one image was 25 hours. That is somewhat less than ideal.

My conclusion

Unbiased renderers are cool, give incredibly realistic results, and most of them come with outstanding tools to adjust renders after the render has finished. Unfortunately, however, they’re just too damn slow for the work I do. Anyone doing architectural visualisations who is very concerned with accuracy would love an unbiased renderer and likely scorn a biased renderer. I, however, render animations, not just still frames—and 40 hours per frame is simply unacceptable regardless of how much control I have in post.

It’s biased renderers for me.

Pass Contribution Maps

Posted on November 23, 2011

In an effort to improve our lighting and rendering workflow, I’ve been experimenting a lot with using passes and pass contribution maps. My results have been… somewhat less than fully satisfactory, though some useful concepts and techniques have arisen from my experiments.

Research Question 1:

Render passes in Maya are a method of extracting render components (diffuse, specular, etc.) from a single render without significantly increasing render time. Is breaking down a frame into passes sufficiently useful to warrant the disk usage and additional complexity?

Observation 1 – Pass Theory

Before going into the details of how passes work, it’s important to clarify what precisely passes are, and why you would use them. To be honest, I wasn’t entirely sure when I started—sure, I’d heard that they’re useful in some cases, but the additional control never seemed worth the hassle and time investment. As it turns out, combining passes is simple enough: light is additive, so each pass is simply added to all of the other passes (Ambient Occlusion being the main exception to this rule).

Passes example

Only beauty pass


This leads to an interesting question: if most passes are simply additive, could I instead subtract a pass from a final render? For instance, if I wanted to remove and replace the indirect lighting from a scene; could I simply render out an indirect pass and subtract it from my beauty pass?

Yes. Yes, I can, provided that the passes are accurate. And this can be very useful.
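
In compositing terms, the maths is exactly as simple as it sounds. A minimal sketch with tiny stand-in buffers (in practice these would be the renderer’s floating-point EXR passes, in linear colour space):

    import numpy as np

    # Stand-in pass buffers; real ones come from the renderer's output.
    beauty   = np.array([[0.8, 0.5], [0.3, 0.9]])  # full render
    indirect = np.array([[0.2, 0.1], [0.1, 0.4]])  # indirect lighting pass

    # Light is additive, so removing a component is a subtraction...
    direct_only = beauty - indirect

    # ...and replacing it is a subtraction plus a scaled addition.
    relit = direct_only + 1.5 * indirect  # boost the indirect bounce by 50%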

While the idea of subtracting passes from one another is intriguing and leads to some rather interesting tangents that I’m still exploring, there unfortunately isn’t much I can do with the standard component passes (such as those shown above). If I were to attempt compositing onto live action footage or compositing many layers together, the level of control offered might make it worthwhile. However, I have yet to professionally work with live-action footage, so for my purposes, the additional disk space and increased disk I/O generally reduce my overall productivity, not increase it.

Observation 2 – Not Officially Supported

So if the standard array of passes isn’t useful to me, what about some of the more nonstandard ones? Or what about just using the standard ones with advanced shaders?

Sadly, regardless of what one wants to do with passes or pass contribution maps, there’s a good chance that it’s not technically supported by some part of Maya or Mental Ray. The documentation for the mia_material_x_passes shader, for example, states that the following passes are not supported:

  • Incandescence
  • Indirect
  • Reflection
  • Refraction
  • Partial pass contribution map support

Reflections, refractions, and indirect lighting account for about 90% of what I want from the Mental Ray renderer, and 100% of what I want from the mia_material_x_passes shader. So the fact that the documentation claims that they’re not supported is… inconvenient, to say the least.

However, do not despair: my tests indicate that ‘not supported’ usually doesn’t mean anything—at least when using straight passes. For example, in the above image series the ball and the glass vase both have an mia_material_x_passes shader assigned to them, and those passes worked just fine.

However, the moment you start using pass contribution maps, I’m afraid you’ll find—as I did—that ‘not supported’ can mean a lot. This segues nicely into my next research question:

Research Question 2:

If passes are additive, what about lights? Can pass contribution maps be used to render out a separate beauty pass for each light so that each could be individually adjusted post-render without increasing render time?

Observation 1 – Yes

In theory, it works. This is a valid workflow, and one that is supported by some renderers. My initial tests show that Mental Ray can be one of those renderers for a few specific use-cases.

For instance, this render directly out of Maya is somewhat… ugly:

Base beauty pass

But if I take the time to set up render pass contribution maps before rendering, then without adding more than a fraction of a percent to render time, that single render can additionally put out the following passes:

Terence passes

…And with the control that affords, I can adjust the lights in realtime in the compositing package until I end up with something a little more pleasant to look at:

Terence adjusted

Amazingly better.

For comparison, here are a few image pairs showing the combined pass outputs versus Maya’s beauty renders:

Only beauty pass


Only beauty pass


As you can see, the result of combining the passes is virtually identical to the direct beauty pass out of Maya—even after you adjust the lights. In the lower example, I’ve adjusted the lights in the compositing package by adding multipliers to each colour channel (RGB), then applied precisely those same modifications to the colour swatch of the lights in Maya. The results are the same.
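
To be concrete about what ‘adding multipliers to each colour channel’ means, here’s a minimal sketch with stand-in buffers; the gain values are exactly the numbers you would otherwise type into each light’s colour swatch in Maya:

    import numpy as np

    # One beauty pass per light, as produced by the contribution maps
    # (1x1 RGB stand-ins here; real ones are full linear frames).
    key_light  = np.array([[[0.6, 0.5, 0.4]]])
    fill_light = np.array([[[0.1, 0.1, 0.2]]])

    # Per-channel RGB multipliers applied in the compositing package.
    key_gain  = np.array([1.2, 1.0, 0.8])  # warm the key light up
    fill_gain = np.array([0.5, 0.6, 1.4])  # cool the fill down

    # Because light is additive, this graded sum matches a re-render
    # made with the same multipliers on the light colour swatches.
    recombined = key_light * key_gain + fill_light * fill_gain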

So then, the theory is sound. And under some circumstances, it works wonderfully. Unfortunately, this is where the ‘not supported’ issue begins to come into play.

Observation 2 – No, it doesn’t work

Sadly, the lovely and useful technique I have just outlined only works in certain, fairly limited circumstances. Specifically, the following things work only partially:

mia_material_x_passes shaders

These reflect everything. That’s right, everything. Regardless of how many lights you have selected as part of your pass contribution map, an mia_material_x_passes shader will reflect them all—and any attempt at additively combining the passes will result in reflections that are very, very bright.

As a partial workaround, it is possible to render out a beauty pass with a zero-intensity dummy light (if a pass contribution map has no lights in it at all, it will render all lights instead of none). This results in a pass with just reflections. If the reflections-only pass is then subtracted from each individual light pass, the result is a beauty pass without any reflections for each light. Then, after light adjustments have been made, the reflections can be added in again as a last step.
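
Sketched out with stand-in buffers, the workaround looks like this (the reflections-only pass being the zero-intensity dummy-light render described above):

    import numpy as np

    # Per-light beauty passes, each wrongly containing the FULL set of
    # reflections, plus the reflections-only dummy-light pass.
    light_a     = np.array([[[0.50, 0.40, 0.30]]])
    light_b     = np.array([[[0.20, 0.20, 0.35]]])
    reflections = np.array([[[0.05, 0.05, 0.05]]])

    # 1. Strip the duplicated reflections out of each light pass...
    light_a_clean = light_a - reflections
    light_b_clean = light_b - reflections

    # 2. ...grade the lights however you like...
    graded = 1.5 * light_a_clean + 0.8 * light_b_clean

    # 3. ...then add the reflections back exactly once, at the end.
    final = graded + reflections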

Unfortunately, this workaround means that none of the modifications made to lighting in the compositing package will show in the reflections. But, as is the case with Terence’s buttons above, sometimes reflections are small and insignificant enough that this is not a huge problem.

Note: I didn’t explicitly test refractions, but something tells me they’ll most likely have issues, too.

Final Gather

Like reflections (only not shader-specific), Final Gather is always full-on with all lights for every pass contribution map, regardless of which lights are actually linked to the pass. As with reflections, a partial workaround does exist: render a pass with a single zero-intensity dummy light, subtract the result from each individual light pass, then add it back in as a final step.

As with reflections, this means that no lighting changes will affect the Final Gather. And since Final Gather calculations tend to be a lot more visible and noticeable than reflections, this is a much larger issue.


Contribution Map example - Beauty

The scene setup here is simple. For geometry, we’ve got a mid-grey ground plane, a green-tinted incandescent light-emitting plane on the right (it looks white because it’s very bright), a standard Maya spotlight up above the scene, and some reflective black text on the left.

Shader-wise, the ground plane is a mid-grey Lambert, the emitter plane is a very, very bright green Lambert, and the standard Blinn and mia_material_x_passes text speak for themselves.

This should be easy. I’ll just split it into two passes: one for the spotlight and one for the light-emitting plane.

Contribution Map example - spotlight

Contribution Map example - glowing plane

Immediately, problems become clear. In the first image, there is no visible source of green light. The light-emitting plane is excluded from the pass contribution map, so the image should contain no green whatsoever—but, as you can see, it does. All of that green is from the Final Gather calculations, which almost completely ignore pass contribution maps.

Further, since the light-emitting plane is excluded, it should not be visible in any reflections. The standard Blinn shader handles this fairly well; while there probably shouldn’t be that much green in there, there is, at least, no white—which indicates that the incandescent plane is not being reflected. The mia_material_x_passes text, however, is reflecting quite a lot of white—and none of it should be there.

In the second image, the only light source is the green light-emitting plane; the lilac spotlight is excluded from the pass contribution map. So that raises the question: where is the lilac colouration on the mia_material_x_passes text coming from? In this case, it is a reflection of the lilac circle that the spotlight is casting on the ground in the other pass image. As before, the mia_material_x_passes is reflecting things that aren’t supposed to be there.

Also as before, the Final Gather is completely wrong too—though it’s less visible here. If you look very closely directly below the word ‘material’, you can see slight lilac tinting on the ground. Since there is absolutely no source of light in the lower image that isn’t green, this is, again, an example of Final Gather ignoring pass contribution maps.

So, there are at least two serious problems—but what can be done about them?

Reflections can mostly be fixed using the aforementioned technique of rendering out separate reflection-only passes and subtracting them out. Also as before, the reflections wouldn’t show changes made to lights in post, but they’d at least be correct.

The Final Gather problem, however, is insurmountable. The Final Gather pass will never react to any changes made to lighting in post using this method; the only way to get Final Gather to blend correctly is via render layers—and that requires multiple renders of the same scene. Can I do that? Of course. Is it slow? Very.


Both passes and pass contribution maps, while useful in theory, are poorly implemented in Mental Ray. The concept of rendering out each light separately is lovely, and in the future I will likely accept the render time hit to set up separate render layers for different lighting groups. However, I doubt I’ll be using passes all that often (except for zDepth), and I won’t be using pass contribution maps at all.

Over the next few weeks I’ll be looking at the trial versions of V-ray, Fryrender, and probably Maxwell Render too. RandomControl’s Swap looks fairly awesome, so I’ll likely also be seeing how functional it is—though I do have some compatibility concerns.

Stay tuned for more on this story as it develops.

Model Sales

Posted on November 1, 2011

The miasma of marketing has claimed me this past month, so I have not even had a chance to open Maya, much less find something of interest to share. So, instead, I shall talk about our experiences here at Thaumaturgy with that holy grail of the CG industry: model sales.

Like, I am sure, many small studios and independent operators, we here at Thaumaturgy try to get the most mileage we can out of the models, characters, and rigs that we develop. Initially, it seemed that the best way to do that was to spend a little extra time in the development phase to ensure they were in the best possible shape, not only for future use but also for sale online.

As it turned out, that was perhaps not the best plan.

Thaumaturgy’s Models

We currently have seven models on the open market, with several more in near-sellable condition if we ever were to commit the time to finish them off. Sadly, we have not yet seen any indication that such time would be a good investment.

Our models (in alphabetical order) are:

Art Deco Furniture

Art Deco Furniture

Upload date: April 2010
Sales on Turbosquid: 1
Sales on The 3D Studio: 0

Total Sales: $120
Our Take: $48

We originally uploaded this with a different (and much less appealing) main picture for $120 per sale. After a few months, we sold one, but no more. After about six months, we dropped the price to $90, and then after a full year we updated the sales picture to the one you see above (with some text overlay).

No further sales have occurred. I hope the one person who bought the package actually gets some use out of it…

Art Deco Mural

Art Deco Mural

Upload date: October 2009
Sales on Turbosquid: 2
Sales on The 3D Studio: 0

Total Sales: $16
Our Take: $6.40

Honestly, I didn’t think this would ever sell. Apparently, I was wrong—though not very wrong.

Since these sales are in USD, and our studio is in NZ, if you factor in the exchange rate, you could actually buy a pizza with the money we made from this model.

I don’t recall precisely how long this model took to create, but it was in the upper single-digit range.

Bow Tie

Bow Tie

Upload date: October 2009
Sales on Turbosquid: 26
Sales on The 3D Studio: 2

Total Sales: $290
Our Take: $122

Our most popular item—and my goodness, how popular it has been. We have been steadily selling these at the rate of about one per month. Mostly, as you can see, from Turbosquid, where they’re cheaper (due to The 3D Studio’s minimum price).

Why bow ties? I don’t know. We created this model because there wasn’t a good bow tie for sale, and our competition is still slim—so we have sort of cornered the market on this particular model.

I had no idea the CG world needed so many bow ties.

Safety Glasses

Safety Glasses

Upload date: September 2011
Sales on Turbosquid: 0
Sales on The 3D Studio: 0

Total Sales: $0
Our Take: $0

I was very proud of this model when we uploaded it. As a construction hobbyist who uses power tools on a semi-regular basis, I have used a lot of safety glasses in my life. The design of these glasses is wholly original, and represents the safety glasses I would like to have myself, in real life.

Apparently, however, safety glasses are not quite as popular as bow ties. On the pizza metre, this model could scavenge a few grease-stained empty pizza boxes from a dumpster.

Solid Door

Art Deco Door

Upload date: November 2009
Sales on Turbosquid: 0
Sales on The 3D Studio: 3

Total Sales: $45
Our Take: $27

As with many of the models we have uploaded, this represents something I would like to have in real life. I am not a fan of modern hollow doors, which you couldn’t slam if your life depended on it. To me, a door should have heft, weight, and solidity.

After uploading this model, we sold three almost immediately—and not a single one since. On the pizza scale, this model could probably scrape out four pizzas, as long as we didn’t go for the gourmet menu.



Upload date: October 2010
Sales on Turbosquid: 1
Sales on The 3D Studio: 0

Total Sales: $99
Our Take: $39.60

We originally uploaded an earlier version of this model for $200. Since then, we have edited and updated his design and facial features three times (for the better each time, in my view), and steadily reduced his price to its current $99 price point. As a fully rigged and talking character, I personally feel that he’s one of the better-value male humans on Turbosquid. True, his bellhop uniform probably won’t see all that much use, but as I outlined here (using Terence), it’s easy to change a character’s clothing.

So far, only one person appears to agree with me—and they didn’t bother to review the model. Since the character took ~300 hours to develop, the return on investment for this model is comparable to melting down $2 coins for their metal content.

Violin

Upload date: June 2010
Sales on Turbosquid: 3
Sales on The 3D Studio: 0

Total Sales: $120
Our Take: $48

This is another model I didn’t really expect to sell. We initially downloaded a free model of a violin, but it turned out to be in such terrible shape that we simply couldn’t render it from any distance—so, using its shape as a rough guide, we made this one. The neck is slightly too long, the curves aren’t elegant, and don’t get me started on the chin-rest.

And yet, this violin has brought in an equivalent amount of lunch money to the Art Deco furniture collection—and the violin was much, much easier (and faster) to make.


With only seven models on the market, our experience is likely not typical. We may simply be unlucky, or perhaps we’re not presenting our models correctly. Maybe there’s something in the writeup that is putting people off. Who knows? Certainly, however, our experience points to model sales being a total waste of time and money.

Buying models is great. Selling models is not.

Good luck!