IronPDF - Understanding CEF/Chromium Memory Usage in Long-Running Applications

IronPDF uses Chromium-based rendering, so memory used during PDF generation is often allocated outside the .NET garbage collector. Because of that, memory usage may remain high after rendering finishes, even when documents are disposed and GC.Collect() is called. In many cases, this is expected behavior and not necessarily a memory leak.

Applies to

  • IronPDF
  • IronPdfEngine
  • Windows and Linux
  • Docker, Kubernetes, VM, and server-hosted environments
  • .NET, Java, and applications using Chromium-based rendering

Why memory usage may stay high after rendering

IronPDF relies on CEF/Chromium for rendering. Chromium uses a large amount of unmanaged/native memory, which is separate from the memory tracked by the .NET garbage collector.

That means:

  1. Your application may correctly dispose of PdfDocument objects.
  2. The .NET runtime may correctly collect managed objects.
  3. But the overall process or container memory may still remain high.

This happens because Chromium often keeps previously allocated memory available for reuse instead of immediately returning it to the operating system after every render.

A simple way to think about it: Chromium behaves like a warehouse that keeps shelves ready after a busy shift instead of tearing everything down and rebuilding it for the next order.

Why GC.Collect() may not reduce memory usage

Calling GC.Collect() only affects managed .NET memory.

It does not force Chromium to immediately release native memory back to the operating system. So even if your code disposes documents properly and runs garbage collection, the memory footprint shown by Task Manager, Docker, Kubernetes, or New Relic may not drop right away.
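A quick way to see this gap is to compare the managed heap size with the whole-process working set. The following is a minimal plain-.NET sketch (the absolute numbers will vary by environment; no IronPDF APIs are involved):

```csharp
using System;
using System.Diagnostics;

class MemorySnapshot
{
    static void Main()
    {
        // Managed heap size as tracked by the .NET GC (after a forced collection).
        long managedBytes = GC.GetTotalMemory(forceFullCollection: true);

        // Working set of the whole process, which also includes native
        // allocations (such as Chromium's) that the GC cannot see or reclaim.
        long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine($"Managed heap: {managedBytes / 1024 / 1024} MB");
        Console.WriteLine($"Working set:  {workingSetBytes / 1024 / 1024} MB");
    }
}
```

After rendering, the working set can remain high even when the managed heap is small; that difference is largely native/Chromium memory.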

This is especially important when:

  • the application is long-running
  • multiple render jobs happen over time
  • rendering happens inside a shared service or container
  • the engine is accessed from another runtime, such as Java calling IronPdfEngine

For Java integrations using IronPdfEngine, this also means the memory pressure may be seen in the engine process/container, not only in the JVM heap.

Does this always mean there is a memory leak?

Not necessarily.

This behavior is common for Chromium-based rendering engines. In many cases, the memory is being retained for reuse, not leaked.

This is usually considered expected behavior when:

  1. Memory rises during rendering activity.
  2. Memory later stabilizes at a higher baseline.
  3. Subsequent renders reuse that memory instead of causing unlimited growth.

It may need further investigation when:

  1. Memory keeps increasing without stabilizing under a similar workload.
  2. Container or pod restarts happen regularly because memory keeps climbing.
  3. Growth continues even after concurrency is reduced.
  4. A minimal reproduction shows the same pattern over time with a controlled workload.

Recommended strategies for long-running applications

If your application needs memory usage to stay consistently low, the best approach is usually architectural rather than forcing garbage collection.

1. Limit concurrent rendering

Multiple Chromium render jobs running at the same time can increase native memory pressure significantly.

If your application renders PDFs in parallel, reduce the number of concurrent render operations.

A simple .NET example:

// Allow at most 2 render jobs to run at the same time.
private static readonly SemaphoreSlim RenderSemaphore = new(2);

public async Task<T> RunRenderAsync<T>(Func<T> renderWork)
{
    // Wait for a free slot before starting the render.
    await RenderSemaphore.WaitAsync();

    try
    {
        return renderWork();
    }
    finally
    {
        // Always release the slot, even if the render throws.
        RenderSemaphore.Release();
    }
}

You can wrap your IronPDF render call inside this throttled section so that only a limited number of render jobs run at once.
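For example, a throttled render might look like this (a sketch using the ChromePdfRenderer API; adapt it to your own renderer and data flow):

```csharp
using IronPdf;

public async Task<byte[]> RenderHtmlThrottledAsync(string html)
{
    return await RunRenderAsync(() =>
    {
        var renderer = new ChromePdfRenderer();

        // Dispose the document promptly; copy the bytes out first.
        using var pdf = renderer.RenderHtmlAsPdf(html);
        return pdf.BinaryData;
    });
}
```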

2. Use batch processing

If your workload renders many PDFs continuously, process them in fixed-size batches instead of keeping a single renderer running at full intensity indefinitely.

For example:

  1. Render a fixed number of documents.
  2. Finish the batch.
  3. Recycle the rendering worker or engine process if needed.

This is often a practical option when predictable memory ceilings matter more than keeping a single renderer process alive indefinitely.
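The batch pattern above can be sketched as follows. `RenderJob`, `RenderOneAsync`, and `RecycleEngineAsync` are hypothetical placeholders for your own render and recycle logic:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

const int BatchSize = 50; // illustrative batch size; tune for your workload

public async Task ProcessQueueAsync(IEnumerable<RenderJob> jobs)
{
    int rendered = 0;

    foreach (var job in jobs)
    {
        await RenderOneAsync(job);
        rendered++;

        // After a full batch, recycle the rendering worker/engine process
        // so that Chromium's native memory is returned to the OS.
        if (rendered % BatchSize == 0)
        {
            await RecycleEngineAsync();
        }
    }
}
```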

3. Run rendering in a separate worker process or container

For production systems, a good pattern is to isolate PDF rendering from the main application.

Benefits:

  1. The main app remains stable.
  2. The rendering worker can be restarted independently.
  3. Native Chromium memory is fully released when that worker process exits.

This is especially useful in:

  • Docker or Kubernetes environments
  • background job systems
  • API platforms that need strict memory control
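In Kubernetes, for example, the rendering worker can run as its own deployment with an explicit memory limit, so restarting that pod releases Chromium's native memory without touching the main application. All names and values below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pdf-render-worker        # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pdf-render-worker
  template:
    metadata:
      labels:
        app: pdf-render-worker
    spec:
      containers:
        - name: renderer
          image: my-registry/pdf-render-worker:latest   # illustrative image
          resources:
            requests:
              memory: "512Mi"
            limits:
              memory: "2Gi"      # hard ceiling; the pod restarts if exceeded
```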

4. Monitor the right process

When using IronPdfEngine or a remote rendering setup, make sure you are monitoring the memory of the rendering engine process/container, not only the calling application.

For example:

  • In .NET, check the host process and any rendering-related child/native process behavior.
  • In Java + IronPdfEngine scenarios, check the engine container memory separately from JVM heap usage.
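With standard container tooling, that might look like the following (the container name and pod label are illustrative; `kubectl top` requires the metrics-server addon):

```shell
# Per-container memory for a Docker-hosted engine
docker stats ironpdfengine --no-stream

# Per-pod memory in Kubernetes (requires metrics-server)
kubectl top pod -l app=ironpdfengine
```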

5. Dispose documents promptly

Disposing documents is still important. It helps release managed wrappers and ensures your application is not holding onto objects longer than necessary.

However, document disposal alone should not be expected to immediately reduce Chromium’s native memory footprint at the OS level.
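A minimal disposal pattern, again using the ChromePdfRenderer API as an example:

```csharp
using IronPdf;

public void SaveReport(string html, string path)
{
    var renderer = new ChromePdfRenderer();

    // `using` releases the managed PdfDocument wrapper promptly,
    // even though Chromium may keep native memory reserved for reuse.
    using (PdfDocument pdf = renderer.RenderHtmlAsPdf(html))
    {
        pdf.SaveAs(path);
    }
}
```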

Best practice summary

If memory stays high after PDF generation, that does not automatically mean IronPDF has a memory leak.

With Chromium-based rendering, it is normal for memory to:

  1. increase during rendering
  2. remain reserved for reuse
  3. be fully released only when the hosting process or container exits

For long-running services, the most effective patterns are:

  • limiting concurrent renders
  • isolating rendering in a worker process/container
  • recycling the rendering worker after a defined number of jobs or after a time threshold

When to contact support

Please contact support if you can reproduce all of the following:

  1. Memory continues growing without stabilizing under a controlled workload.
  2. The issue still happens after limiting concurrency.
  3. The issue still happens when rendering is isolated.
  4. You can share a minimal reproduction project.

When opening a ticket, include:

  • IronPDF version
  • OS and hosting environment
  • Docker/Kubernetes details if applicable
  • programming language
  • render frequency and concurrency level
  • memory graphs or monitoring screenshots
  • a minimal reproducible sample