JEP 520: JFR Method Timing and Tracing Explained
Last Updated: January 28, 2026
By: javahandson
Series
Learn Java in an easy way
JEP 520: JFR Method Timing and Tracing introduces exact method-level execution measurement in Java Flight Recorder. Learn how timing and tracing complement sampling, how filters work, and when JEP 520 provides certainty over guesswork.
Modern Java applications are not short or simple programs anymore. They are long-running services, full of frameworks, layers, callbacks, and background work. To understand their performance, we already rely on powerful tools like profilers and Java Flight Recorder (JFR). Sampling profilers tell us where CPU time is spent, and JFR gives us a low-overhead, always-on view of runtime behavior.
Yet, even with all these tools, developers repeatedly hit a very practical wall.
At some point, everyone asks a deceptively simple question:
“How long did this exact method take to execute?”
Sampling profilers cannot answer this precisely. They infer time statistically, which is excellent for hotspots but unreliable for short-lived or infrequently called methods. Two methods might look identical in samples, even if one consistently runs twice as long. JFR events give deep insight, but until now, there was no direct, built-in way to measure the exact execution time of arbitrary Java methods without adding manual instrumentation.
Developers usually fall back to logging timestamps or using System.nanoTime() around method calls. This approach is fragile. It clutters code, introduces overhead, and is often removed or forgotten. Worse, it does not integrate cleanly with profiling data, making it hard to correlate timing with threads, stack traces, or other runtime events.
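To make that fragility concrete, here is a minimal sketch of the manual pattern (doWork is a hypothetical stand-in for real application code):

import java.util.concurrent.TimeUnit;

public class ManualTiming {

    public static void main(String[] args) {
        long start = System.nanoTime();
        try {
            doWork(); // the method we actually want to measure
        } finally {
            long elapsed = System.nanoTime() - start;
            System.out.printf("doWork took %d ms%n",
                    TimeUnit.NANOSECONDS.toMillis(elapsed));
        }
    }

    // Hypothetical stand-in for real application work.
    static void doWork() {
        try {
            Thread.sleep(150);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Every measured call site needs this ceremony, and the output is just a log line with no thread, stack trace, or event context attached.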
This is the gap that JEP 520 addresses.
JEP 520 does not try to replace profilers or JFR sampling. Instead, it adds a precise, method-level timing capability that complements them. The goal is exact measurement when needed, without rewriting code or relying on ad hoc logging. It lets developers ask targeted questions about specific methods and get reliable, high-fidelity answers that fit naturally into existing JFR-based workflows.
In short, JEP 520 exists because precision still matters. Sampling tells us where to look. Method-level timing tells us exactly what happened.
Sampling-based profiling has earned its place for a reason. It is lightweight, safe to run in production, and excellent at answering one very important question: what code paths appear most frequently while the application is running. For identifying hotspots, lock contention, or CPU-heavy loops, sampling is often the best possible tool.
However, sampling works by observation, not measurement.
A sampler periodically inspects thread stacks and then infers where time is being spent based on how often methods appear in those snapshots. This statistical approach is powerful, but it also defines its limits. Sampling can only tell us what shows up often enough to be seen. It cannot reliably tell us what happens infrequently, nor can it give precise timing for individual executions.
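As a toy illustration only (real profilers and JFR use far more sophisticated and safer mechanisms), the core idea of sampling can be sketched like this:

import java.util.HashMap;
import java.util.Map;

public class ToySampler {

    static volatile double blackhole; // keeps the busy loop from being optimized away

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(ToySampler::busyWork, "worker");
        worker.setDaemon(true);
        worker.start();

        // Every 10 ms, record which method is on top of the worker's stack.
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            StackTraceElement[] stack = worker.getStackTrace();
            if (stack.length > 0) {
                counts.merge(stack[0].getMethodName(), 1, Integer::sum);
            }
            Thread.sleep(10);
        }

        // We get frequencies, not durations: a method that appears often is
        // "hot", but no exact execution time can be derived from these counts.
        System.out.println(counts);
    }

    static void busyWork() {
        while (true) {
            blackhole += Math.sqrt(Math.random());
        }
    }
}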
Because of this, sampling does not answer some questions that developers regularly care about.
It does not tell us the exact duration of a single method invocation. If a method runs once every few seconds and takes 200 milliseconds, it may barely appear in samples, even though it is clearly expensive when it runs. Similarly, if a method’s execution time varies widely depending on input or state, sampling smooths those variations into averages that hide important details.
Rare but costly calls are especially problematic. A slow initialization path, a fallback execution, or an error-handling branch may dominate latency when it occurs, yet remain invisible in a sampling profile. From the profiler’s point of view, it simply does not run often enough to stand out.
To bridge this gap, teams usually reach for logging, bytecode agents, or external APM tools. These solutions work, but they come with trade-offs. Logging timestamps adds noise and requires manual maintenance. Agents and instrumentation frameworks can be intrusive, require special deployments, and often introduce overhead that makes teams hesitant to use them in production. Operationally, they are heavy for what is often a very focused question.
This leaves developers stuck between approximation and intrusion.
JEP 520 exists to fill this exact gap.
JEP 520 introduces a focused enhancement to Java Flight Recorder (JFR): the ability to record the exact execution time of selected Java methods, directly inside the JVM.
The key idea is not broad profiling, but precise measurement.
With JEP 520, the JVM can be instructed to instrument specific methods. When an instrumented method begins execution, the JVM records a start point. When the method completes—whether normally or due to an exception—the JVM records an end point. The elapsed time between these two points represents the actual execution duration of that method.
This timing information is not approximated through sampling and is not collected via external agents or application code changes. Instead, it is recorded as JFR events, making method-level timing a first-class part of the existing JFR ecosystem.
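Because the results are ordinary JFR events, the standard consumer API can read them back. A minimal sketch, assuming a recording named recording.jfr that contains jdk.MethodTrace events:

import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class ReadTraces {

    public static void main(String[] args) throws Exception {
        try (RecordingFile recording = new RecordingFile(Path.of("recording.jfr"))) {
            while (recording.hasMoreEvents()) {
                RecordedEvent event = recording.readEvent();
                if ("jdk.MethodTrace".equals(event.getEventType().getName())) {
                    // Start time and duration are standard JFR event attributes.
                    System.out.println(event.getStartTime() + " " + event.getDuration());
                }
            }
        }
    }
}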
JEP 520 defines two complementary kinds of events for this purpose, each designed for a different level of analysis.
The Method Timing event provides aggregated timing information for instrumented methods.
Rather than emitting an event for every invocation, the JVM aggregates timing data over the recording interval. This includes information such as how many times a method was invoked and summary statistics about its execution duration.
This event is especially useful when we want to confirm performance characteristics suggested by sampling. If a method appears frequently in profiler output, method timing can answer a more precise question: how long does this method actually take when it runs?
Because the data is aggregated, the overhead remains low, and the resulting recordings stay compact. This makes Method Timing suitable for longer recordings and routine performance validation.
The Method Trace event captures per-invocation timing information.
Instead of aggregating data, the JVM records individual events for method executions. Each event represents a single invocation and its execution duration, along with additional execution context as supported by JFR, such as stack traces.
This level of detail is intended for deeper investigation. When a method’s execution time varies significantly depending on call path, input, or runtime state, per-invocation tracing helps identify why some executions are slow while others are fast.
Because tracing records more detailed information for every invocation, it is naturally heavier than aggregated timing and is best used for targeted analysis rather than continuous monitoring.
Together, these two event types reflect the central design of JEP 520: exact measurement when we need certainty, integrated cleanly into JFR, without replacing sampling or existing profiling tools.
Method filters are a central concept in JEP 520. Exact method timing is only useful if it can be applied selectively. Instrumenting every method would generate too much data and unnecessary overhead. Filters ensure that only the methods we care about are timed.
JEP 520 allows us to define filters that tell the JVM which methods should be instrumented and, therefore, which executions should generate JFR events. This keeps overhead controlled while still giving us precise measurements where they matter.
The filtering model is intentionally simple, but expressive enough to cover real-world use cases.
The most precise form of filtering targets individual methods.
A method-level filter identifies a specific method using its declaring class and method name. Only that method is instrumented, and only its executions contribute timing or tracing data.
Conceptually, this looks like:
com.javahandson.service.PaymentService::processPayment
In practice, we can enable tracing or timing for just this method when starting a JFR recording:
$ java -XX:StartFlightRecording:jdk.MethodTrace#filter=com.javahandson.service.PaymentService::processPayment,filename=recording.jfr …
This approach is ideal when we already suspect a particular method and want exact execution timings, without noise from surrounding code.
Class-level filters operate at a broader scope.
Instead of naming a single method, the filter selects a class, and all methods declared in that class become eligible for timing or tracing. This is useful when analyzing an entire component, service, or boundary layer.
Conceptually:
com.javahandson.repository.AccountRepository
All methods in AccountRepository are then included automatically. This gives us a clearer picture of how time is distributed within the class, without having to list each method explicitly.
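As a sketch (the recording filename is illustrative), enabling aggregated timing for the whole class could look like:

$ java '-XX:StartFlightRecording:jdk.MethodTiming#filter=com.javahandson.repository.AccountRepository,filename=repo.jfr' …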
If we need to time or trace many methods, JEP 520 also allows us to define them in a custom JFR configuration file, rather than passing everything on the command line.
<?xml version="1.0" encoding="UTF-8"?>
<configuration version="2.0">
  <event name="jdk.MethodTiming">
    <setting name="enabled">true</setting>
    <setting name="filter">
      com.example.Foo::method1;
      com.example.Bar::method2;
      com.example.Baz::method17
    </setting>
  </event>
</configuration>
We can then start recording using this configuration alongside the default settings:
$ java -XX:StartFlightRecording:settings=timing.jfc,settings=default …
This keeps startup commands clean while still allowing precise control.
Annotation-based filtering is one of the most powerful features introduced by JEP 520.
Rather than targeting classes or methods by name, we can filter based on annotations. This aligns naturally with modern Java frameworks, where annotations already define semantics such as controllers, endpoints, initialization logic, or debug paths.
For example, we might define custom annotations like this:
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

@Retention(RUNTIME)
@Target({ TYPE, METHOD })
public @interface StopWatch {
}

@Retention(RUNTIME)
@Target({ TYPE, METHOD })
public @interface Initialization {
}

@Retention(RUNTIME)
@Target({ TYPE, METHOD })
public @interface Debug {
}
When we need to troubleshoot the application, we can enable timing or tracing based on these annotations:
$ java -XX:StartFlightRecording:method-trace=@com.javahandson.Debug …
$ java -XX:StartFlightRecording:method-timing=@com.javahandson.Initialization,@com.javahandson.StopWatch …
With this approach, any method or class marked with these annotations is included automatically. This works especially well in framework-driven code, where class names may change but annotations represent stable intent.
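For illustration, this is how the annotations might be applied in application code (the class and methods here are hypothetical):

import com.javahandson.Debug;
import com.javahandson.StopWatch;

public class PricingEngine {

    @StopWatch
    public double calculatePricing(long orderId) {
        // ... pricing logic we want timed while troubleshooting ...
        return 0.0;
    }

    @Debug
    void dumpPricingCache() {
        // ... diagnostic path, traced only when the Debug filter is enabled ...
    }
}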
JEP 520 also defines how certain method forms are handled so that filtering remains predictable.
Constructors (<init>) can be included, allowing us to measure object construction paths.
Static initializers (<clinit>) may also be instrumented, which is useful for analyzing class-loading and startup behavior.
Overloaded methods are handled together. When a method name is selected, all overloads are included, regardless of parameter list.
This avoids accidental gaps where only some variants of a method are measured.
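For example, timing every constructor of a specific class, or static initializers across the application, might look like this (the options are quoted so the shell does not interpret the angle brackets; class names and filenames are illustrative):

$ java '-XX:StartFlightRecording:jdk.MethodTiming#filter=com.javahandson.service.PaymentService::<init>,filename=ctor.jfr' …
$ java '-XX:StartFlightRecording:jdk.MethodTiming#filter=::<clinit>,filename=clinit.jfr' …

The second form, with an empty class part, is intended to match static initializers in all classes.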
Overall, method filters ensure that JEP 520 remains a targeted and intentional measurement tool. We decide the scope of instrumentation, the JVM performs the timing, and JFR records the results—without code changes or external agents.
Let’s walk through one investigation from start to finish, using the flow JEP 520 is designed for: we start with aggregated timing to gain certainty, and then switch to per-invocation tracing when we need deeper context.
Assume we are working on a service where requests sometimes feel slow. Sampling profiles do not clearly point to a hotspot.
From experience, we form a hypothesis: “Requests feel slow sometimes. I suspect PricingService::calculatePricing, but sampling doesn’t give me certainty.”
At this point, the question is very specific:
“What is the actual execution time of this method when it runs?”
Sampling can hint, but it cannot guarantee exact per-method timing, especially for rare but expensive executions. This is exactly the gap JEP 520 is meant to fill.
We begin with Method Timing because our goal is confirmation, not diagnosis.
We start a JFR recording and enable timing for the suspected method using a method filter:
$ java '-XX:StartFlightRecording:jdk.MethodTiming#filter=com.javahandson.service.PricingService::calculatePricing,filename=timing.jfr' -jar app.jar
The application runs under normal load for a while. When enough data is collected, we inspect the results:
$ jfr view method-timing timing.jfr
Because our filter targets only one method, the aggregated output focuses only on that method, for example:
| Timed Method | Invocations | Average Time |
| --- | --- | --- |
| com.javahandson.service.PricingService.calculatePricing() | 420 | 2.30 ms |
From this aggregated view, we can now answer how often the method ran and how long it took on average: 420 invocations at 2.30 ms each. This step gives us proof, not guesswork.
The new clarity we gain is decisive:
“calculatePricing() is not just ‘feeling slow’. We now have actual timings for it.”
Note: If we broaden the filter to a class or an annotation, then the same view can list timings for multiple methods. (We covered that in the filters section.)
If the aggregated timing confirms that the method is expensive or variable, the next question becomes:
“Which call path triggers the slow invocations?”
Now we enable Method Trace for the same method and capture a short, focused recording:
$ java '-XX:StartFlightRecording:jdk.MethodTrace#filter=com.javahandson.service.PricingService::calculatePricing,filename=trace.jfr' -jar app.jar
Since the method was invoked 420 times, JFR records 420 separate jdk.MethodTrace events — one for each execution.
To inspect these events, including stack traces, we print them:
$ jfr print --events jdk.MethodTrace --stack-depth 20 trace.jfr
The output contains many entries. Below are three representative samples to illustrate what we see.
Fast invocation (common case)
jdk.MethodTrace {
startTime = 20:32:39
duration = 1.95 ms
method = com.javahandson.service.PricingService.calculatePricing()
eventThread = "http-nio-8080-exec-2"
stackTrace = [
com.javahandson.service.PricingService.calculatePricing(...)
com.javahandson.service.OrderService.placeOrder(...)
com.javahandson.api.OrderController.createOrder(...)
org.springframework.web.servlet.DispatcherServlet.doDispatch(...)
]
}
The above represents the normal path. The method executes quickly and follows the expected request flow.
Another fast invocation (different request thread)
jdk.MethodTrace {
startTime = 20:32:41
duration = 2.10 ms
method = com.javahandson.service.PricingService.calculatePricing()
eventThread = "http-nio-8080-exec-5"
stackTrace = [
com.javahandson.service.PricingService.calculatePricing(...)
com.javahandson.service.OrderService.placeOrder(...)
com.javahandson.api.OrderController.createOrder(...)
org.springframework.web.servlet.DispatcherServlet.doDispatch(...)
]
}
Most of the 420 invocations look similar to these first two entries.
Slow invocation (problematic case)
jdk.MethodTrace {
startTime = 20:32:47
duration = 18.40 ms
method = com.javahandson.service.PricingService.calculatePricing()
eventThread = "http-nio-8080-exec-7"
stackTrace = [
com.javahandson.service.PricingService.calculatePricing(...)
com.javahandson.service.PricingFallbackService.calculateFallback(...)
com.javahandson.integration.ExternalRateClient.fetchRates(...)
com.javahandson.api.OrderController.createOrder(...)
org.springframework.web.servlet.DispatcherServlet.doDispatch(...)
]
}
The above trace is different. The duration is much higher, and the call path reveals a fallback and an external integration call that are not present in the fast executions.
What changes with Method Trace
By looking at multiple trace events, we start seeing patterns instead of isolated data points.
We can now observe that most invocations complete in about 2 ms along the normal request path, while the slow invocations all pass through PricingFallbackService and the external rate client. This is the key insight that aggregated timing alone cannot provide.
The new clarity becomes actionable: “Only the fallback pricing path is slow. The core pricing logic is fine — the issue lies elsewhere.”
Why this matters
This is the moment where JEP 520 truly shines.
Instead of guessing or adding logging, we now have clear, JVM-provided evidence — one event per invocation — to guide the next fix.
This keeps the investigation focused, efficient, and grounded in facts.
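As a closing sketch for this walkthrough, the same per-invocation events can also be consumed live, in-process, through JFR event streaming. This assumes the event's filter setting accepts the same syntax as the command-line examples above:

import jdk.jfr.consumer.RecordingStream;

public class LiveTracing {

    public static void main(String[] args) {
        try (RecordingStream rs = new RecordingStream()) {
            rs.enable("jdk.MethodTrace")
              .with("filter", "com.javahandson.service.PricingService::calculatePricing");
            rs.onEvent("jdk.MethodTrace", event ->
                    System.out.println(event.getDuration() + " on "
                            + event.getThread().getJavaName()));
            rs.start(); // blocks this thread; use startAsync() to run in the background
        }
    }
}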
JEP 520 is not a standalone feature. It is part of the ongoing work to make Java Flight Recorder (JFR) easier to trust and more useful for everyday performance analysis. Its real value becomes clear when we see how it works alongside existing JFR capabilities.
First, JEP 520 works together with CPU-time profiling, not against it. CPU-time profiling tells us where threads spend their CPU time. It is very good at answering questions like which methods are keeping the CPU busy. However, it does not tell us how long a method took from start to end if that time includes waiting, I/O, or blocking.
Method timing fills that gap. It measures elapsed time inside a method, regardless of whether the thread was actively using the CPU or waiting for something else. When we use both together, we get a fuller picture: CPU profiling shows where CPU is consumed, and method timing shows how long an operation actually took.
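The distinction is easy to demonstrate. In this small example, wall-clock time passes while CPU time barely moves (Thread.sleep stands in for blocking I/O or lock waits):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ElapsedVsCpu {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long cpuStart = threads.getCurrentThreadCpuTime(); // nanoseconds of CPU used
        long wallStart = System.nanoTime();

        Thread.sleep(200); // blocking: wall-clock time passes, CPU time does not

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        // Expect roughly: wall ~200 ms, cpu ~0 ms. A CPU profiler sees almost
        // nothing here; method timing would report the full elapsed duration.
        System.out.println("wall: " + wallMs + " ms, cpu: " + cpuMs + " ms");
    }
}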
Second, JEP 520 benefits from recent improvements in safer stack sampling inside the JVM. The JVM has been steadily improving how stack traces are collected so that they are more reliable and less disruptive. This matters because method tracing can include stack trace information. When the underlying stack collection is safer and more stable, the context we see in method trace events becomes more trustworthy and suitable even for production investigations.
Most importantly, JEP 520 is not meant to replace sampling-based profiling.
Sampling is still the best tool when we don’t know where the problem is. It is lightweight, can run continuously, and helps us explore large codebases without prior assumptions. Method timing and tracing are different. They assume that we already have a strong suspicion and now need exact answers for a small, specific part of the code.
In simple terms, each tool has a clear role: sampling discovers where to look, Method Timing confirms how long a method actually takes, and Method Trace explains why individual invocations differ.
Seen this way, JEP 520 makes JFR more complete. It adds precision where approximation is not enough, while still relying on sampling and other JFR events for broad, low-overhead insight.
JEP 520 gives very accurate results, but it’s important to understand that this accuracy comes with some cost.
To measure the exact method execution time, the JVM has to add extra work around a method. When an instrumented method starts, the JVM notes the time. When it finishes, the JVM notes the time again and records the result in JFR. This extra work means the method becomes slightly slower than normal.
Because of this, JEP 520 is not meant to be used everywhere or all the time.
If we enable method timing or tracing for many methods, especially methods that run very frequently, the overhead can add up. Method Trace is even heavier, because it records data for every single execution. Using it broadly can create large recordings and can affect application performance.
That’s why JEP 520 should be used carefully and intentionally.
It works best when we use it selectively: on a small number of suspect methods, for a limited time, and with a specific question in mind.
A practical approach is to first rely on sampling to spot suspicious areas. Once we have a strong guess, enable Method Timing to confirm it. Only if needed, switch to Method Trace to understand why the method is slow. After that, turn it off.
In simple terms: JEP 520 is a precision tool, not an always-on monitor. Use it when we need certainty, then put it away once the job is done.
To keep expectations clear, JEP 520 is intentionally limited in scope. The following points describe what it does not support and what it is not designed to do.
Native methods are not supported – Methods implemented in native code run outside the JVM, so their execution boundaries cannot be reliably instrumented for timing.
Abstract methods are not supported – Abstract methods have no implementation, so there is no executable code to measure.
No method argument capture – JEP 520 does not record method parameters, return values, or field values. It measures execution time only, not data flow.
No object or state inspection – The feature does not track object state, heap values, or changes to fields during method execution.
Risk of recursion for certain JDK internals – Instrumenting low-level JDK or JFR-related methods could lead to recursive event recording. Such methods are intentionally excluded or discouraged.
Not suitable for always-on monitoring – Because method instrumentation adds overhead, JEP 520 is not designed to run continuously across large parts of an application.
Not a full tracing solution – It does not aim to replace distributed tracing, APM tools, or sampling-based profilers.
These limits are deliberate. By clearly defining what JEP 520 does not do, the JEP stays focused on providing precise, reliable method timing without unnecessary complexity or risk.
JEP 520 is most valuable when we reach the limits of approximation. There are times when estimates are good enough, and there are times when we need clear, exact answers. This section helps us decide when JEP 520 fits into our workflow.
JEP 520 is the right tool when we already suspect a specific method, need exact execution times rather than statistical estimates, or want to see the call paths behind individual slow invocations.
In these situations, method timing and tracing help us replace guesswork with certainty.
JEP 520 is not the right tool when we don't yet know where the problem is, need broad always-on monitoring, or want to explore a large codebase without a concrete suspicion.
In those cases, sampling-based profiling and existing JFR events remain the better choice.
A simple way to think about it is: sample to discover, time to confirm, trace to explain.
Used at the right moment, JEP 520 helps us move from suspicion to certainty with confidence.
JEP 520 does not try to replace profilers or sampling-based tools. Those tools are still essential for understanding large systems and for discovering where performance problems might exist. Sampling remains the best way to get a broad, low-overhead view of application behavior.
What JEP 520 adds is the missing piece.
It completes the Java Flight Recorder toolbox by giving us a way to measure exact method execution time, directly inside the JVM, without modifying code or relying on intrusive tooling. When approximation is no longer enough, and we need clear evidence, JEP 520 provides that certainty.
Instead of guessing based on probabilities, we can now answer precise questions with confidence. We can confirm whether a suspected method is truly slow, understand how often it behaves badly, and, when needed, see the execution context that explains why.
The real value of JEP 520 is not in replacing existing tools, but in working alongside them. Sampling helps us narrow the search. Method timing and tracing help us finish the investigation.
Sampling tells us where to look. JEP 520 tells us what actually happened.