First couple of tests for review


Hi all, here are my first couple of test clips for review. I am a motion graphics designer, so my focus in testing is render speed and flexibility. So far I have struggled somewhat to get a render out quickly with sufficient quality. I suspect that is mostly due to me still getting used to the renderer.

RGB Balls

Title Render Grey

Title Render with mesh lights

I would love some thoughts and suggestions.


Interesting, I see you did a render engine comparison with your RGB Balls scene.

When I compare a frame of the animation I can see quite a few differences. Most notably, the glass material in the RenderMan version (center image) looks strange.

The appleseed version (top) shows a dark spot on the green ball. Perhaps the bounce limit is set too low, so rays that transmit through the glass ball are not fully propagated.


This comparison is super interesting.

Would you mind providing us with the Blender and appleseed files so that we can make sure everything is set up optimally?


Yeah, I am exploring all the different render engines for Blender. I want to get up to speed on the new rendering techniques and show colleagues the options out there.

The RenderMan result is not great. It is quite a beast, and the glass is not playing well at the moment. Again, a lot of this is down to my inexperience with the tools.

No problem sharing the files; I will upload them when I get home. I would like to end up with a couple of setups that allow for either a speedier workflow or a more polished look.


Here are the links to the RGB Balls Blender and appleseed files. Shout if anything is missing.


Thanks for the scene files.

The dark spot behind the glass sphere is simply due to caustics being disabled. In other words, since caustics are disabled, the glass sphere is considered to be opaque to rays that bounced off the green sphere.

Did you enable caustics in the other renderers, in particular in Cycles?


The simplest way to let light pass through the glass without creating caustics is to make the glass sphere invisible to diffuse and shadow rays: that way, the green sphere will receive direct illumination from the lights and the environment.
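As a sketch of how such per-object visibility flags behave (the flag names here are my assumption for illustration, loosely modeled on per-object-instance ray visibility settings, not appleseed's actual API):

```python
# Minimal sketch of per-object ray visibility flags.
# Flag names ("diffuse", "shadow", ...) are illustrative assumptions.

DEFAULT_VISIBILITY = {
    "camera": True, "diffuse": True, "glossy": True,
    "specular": True, "shadow": True,
}

def is_visible(overrides, ray_type):
    """Return True if an object with the given flag overrides
    should be hit by a ray of the given type."""
    flags = {**DEFAULT_VISIBILITY, **overrides}
    return flags.get(ray_type, True)

# Glass sphere made invisible to diffuse and shadow rays:
glass_sphere = {"diffuse": False, "shadow": False}

# Diffuse and shadow rays leaving the green sphere pass straight through
# the glass, so the green sphere still receives direct illumination:
assert not is_visible(glass_sphere, "diffuse")
assert not is_visible(glass_sphere, "shadow")
assert is_visible(glass_sphere, "camera")  # still visible to the camera
```

The trade-off is physical accuracy: the glass no longer tints or bends the light reaching the green sphere, but for many shots that is invisible and much cheaper than resolving caustics.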

Of course, the even simpler alternative is to enable caustics, and in some cases this is indeed the best option. However, caustics can introduce additional noise. One way to keep that noise under control is to clamp ray intensity using the Max Ray Intensity (MRI) setting. Note that Max Ray Intensity may come in handy even without caustics.
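To illustrate what Max Ray Intensity does (a sketch of the idea, not appleseed's actual implementation): any secondary-ray radiance sample brighter than the threshold is scaled down, which suppresses the rare, very bright samples ("fireflies") that caustics tend to produce, at the cost of some energy loss:

```python
def clamp_ray_intensity(rgb, max_intensity=1.5):
    """Scale a secondary-ray radiance sample so that its brightest
    channel does not exceed max_intensity (sketch of MRI clamping)."""
    peak = max(rgb)
    if peak <= max_intensity:
        return rgb
    scale = max_intensity / peak
    return tuple(c * scale for c in rgb)

# A firefly sample is scaled down, preserving its color ratios:
print(clamp_ray_intensity((12.0, 6.0, 3.0)))  # -> (1.5, 0.75, 0.375)
# Ordinary samples pass through untouched:
print(clamp_ray_intensity((0.8, 0.4, 0.2)))   # -> (0.8, 0.4, 0.2)
```

This is why clamping darkens bright caustics slightly: the energy removed from clamped samples is simply lost.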

Another observation: a lot of the noise is simply due to glossy reflections. Reducing the number of light and environment samples while increasing the overall number of samples (and thus the number of camera rays per pixel) helps get rid of that noise.

Here’s a rendering of your scene with the following settings:

  • 12 passes (same as in your scene)
  • 16 samples per pass (was 3)
  • 2 environment lighting samples (was 9)
  • 2 direct lighting samples (was 12)
  • MRI set to 1.5 (disabled in your scene)
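For what it's worth, the per-pixel budgets behind those settings work out as follows (simple arithmetic, assuming each lighting sample is taken once per camera ray):

```python
# Old vs. new per-pixel sample budgets for the settings above.
old_camera_rays = 12 * 3    # 12 passes x 3 samples per pass
new_camera_rays = 12 * 16   # 12 passes x 16 samples per pass
print(old_camera_rays, new_camera_rays)  # -> 36 192

# Direct lighting work per pixel stays in the same ballpark:
print(old_camera_rays * 12)  # -> 432 (36 camera rays x 12 light samples)
print(new_camera_rays * 2)   # -> 384 (192 camera rays x 2 light samples)
```

So the total lighting work is roughly unchanged; the difference is that the five-fold increase in camera rays is what attacks the glossy noise.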


Checking your file, I saw that the bounces were actually unlimited. @franz was correct: you didn't have caustics enabled. May I suggest trying some changed settings:


Another tip is to use AOVs (render passes which show the various contributions to the final image).
They are helpful for distinguishing the different noise contributions (diffuse, glossy).



@Mango3 As far as I can tell from the appleseedz file @fxgogo provided, bounces were not unlimited:



My mistake: I used the new blenderseed developer version. When I loaded the scene file with it, the settings appear to have changed considerably.


Wow guys, thanks for all that feedback. So, as much as possible, I have tried to avoid caustics. I am really learning about this ‘real life’ methodology with these new renderers after so many years of scanline rendering, where I used many passes to get the look I wanted.

So, as I understand it, you are setting samples for the lights in the scene and then samples for the camera as well? I am still a bit unsure about this.

Would you suggest I render in Appleseed Studio or directly in Blender? Are there still differences in features and the output?

Thanks again for the analysis, I will send an updated render shortly.


So, as I understand it, you are setting samples for the lights in the scene and then samples for the camera as well? I am still a bit unsure about this.

Like with most renderers, a good way to approach setting up a render is as follows:

  1. Start by figuring out how many samples/pixel are needed to get rid of primary ray noise: that’s noise caused by depth of field, motion blur and aliasing on directly visible geometric edges. Each of these samples results in one primary ray.

    Whether you use a single pass or split these samples over a number of passes is up to you. Splitting into multiple passes has the benefit of providing early feedback, at the cost of slightly less efficient sampling.

  2. In appleseed, for each primary ray you get a single reflection ray (for glossy and specular surfaces), so the number of primary rays will also control the amount of noise in glossy reflections. In other renderers you can control how many reflection rays (per primary ray) are used to sample glossy surfaces. Eventually appleseed will also offer such a setting.

  3. Next you need to figure out how many direct lighting and environment lighting samples are necessary to get smooth soft shadows. Each of these samples results in one shadow ray. These samples are again per primary ray, so if you use 64 samples/pixel and 2 direct lighting samples, that will result in a total of 128 direct lighting samples per pixel.
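The three steps above can be sketched as a small budget calculation (illustrative only; the names are mine, not appleseed settings):

```python
def ray_budget(passes, samples_per_pass, direct_samples, env_samples):
    """Per-pixel ray counts following the three steps above (sketch)."""
    primary = passes * samples_per_pass  # step 1: camera rays per pixel
    return {
        "primary_rays": primary,  # also drives glossy noise (step 2)
        "direct_shadow_rays": primary * direct_samples,  # step 3
        "env_shadow_rays": primary * env_samples,        # step 3
    }

# The example from step 3: 64 samples/pixel and 2 direct lighting samples
budget = ray_budget(passes=1, samples_per_pass=64,
                    direct_samples=2, env_samples=2)
print(budget["direct_shadow_rays"])  # -> 128
```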

Would you suggest I render in Appleseed Studio or directly in Blender? Are there still differences in features and the output?

There won’t be any difference in the output. There are, however, one or two settings (e.g. roughness clamping) that are available in appleseed.studio but aren’t exposed in blenderseed. That said, appleseed.studio doesn’t allow rendering a whole sequence, so you’re probably better off rendering from Blender.

We’ll see to expose roughness clamping in blenderseed.


Here’s the result with the same settings as my previous render but this time with roughness clamping:

The main difference is the glossy reflection on the ground, the other differences are quite subtle.


I saw that. Was there a difference in render time between the two?

Thanks for that extra info.


I haven’t paid any attention to rendering times, sorry; that said I wouldn’t expect Roughness Clamping to have a significant impact on rendering time.

As an illustration, here is what Roughness Clamping does: after a glossy reflection on the floor, this is how rays see the transparent sphere:

(That was rendered without Roughness Clamping, but as you can see it leads to the same reflection of the glass sphere on the floor).
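For the curious, one common formulation of this kind of path regularization (my sketch of the general idea, not appleseed's actual implementation) clamps a surface's roughness to at least the roughest glossy bounce already on the path, which is why the sharp glass sphere appears blurred in reflections off the rough floor:

```python
def regularized_roughness(path_roughnesses, surface_roughness):
    """Sketch of roughness clamping (path regularization): a surface
    seen through earlier glossy bounces is treated as at least as
    rough as the roughest bounce already on the path."""
    return max([surface_roughness] + list(path_roughnesses))

# Glass sphere (near-zero roughness) seen directly by the camera:
print(regularized_roughness([], 0.0))     # -> 0.0 (stays sharp)
# Same sphere seen via a reflection off a rough floor (roughness 0.3):
print(regularized_roughness([0.3], 0.0))  # -> 0.3 (rendered blurry)
```

Blurring the BSDF after the first glossy bounce makes those indirect paths far easier to sample, which is where the noise reduction comes from.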


Ok, I see. Regarding render times, I can’t remember the exact time, but a frame with the new settings took ± 1h 47min on my i5 laptop.


Wow, that’s long. It takes about 20 minutes on my machine (a 3-year-old Core i7 with 6 physical cores).

What were rendering times with your original settings?

You should be able to reduce the number of passes in the new settings to match the render times of your old settings, but with less noise.

As for why it takes so long, maybe it’s due to the large HDRI (8k x 4k)? I’ll look into it.


Yeah, it is bugging me, as my older i7 renders much quicker. It is one of those Dell i5 7000 gaming laptops. The GPU cooling is fantastic, but I think they have severely throttled the CPU to stop it overheating. When rendering, the air cooling fans do not engage. I think I need to do some hacking…


A note aside:
For Windows users there is a great, free tool called CPU-Z which gives you detailed hardware info about your CPU and memory (clock speed, cache settings, memory access latency, etc.).
It also has a CPU benchmark with some high-end CPU references for comparison, so if you want to know what gains to expect from a new CPU, it is an easy tool to use.
For instance, if I swapped my laptop (4th-gen Core i7, 4 cores / 8 threads) for an AMD Ryzen Threadripper 1950X, I could expect a roughly five-fold increase in rendering speed.


I tried with a radically smaller HDR map (512×256) and render times were essentially identical.

I’ve profiled this scene and it seems it is bound by pure ray tracing performance, so things are pretty sane and that’s how fast appleseed will go for now. Our big avenues for faster rendering are (1) faster ray tracing via Intel’s Embree library, (2) better sampling, and (3) eventually, GPU rendering.