Virtual Thread Is to Shake the World
When should you use virtual threads?
Virtual threads (developed under Project Loom, finalized in JDK 21) are lightweight threads managed by the JVM rather than the OS. They shine in scenarios where you need to handle lots of concurrent tasks that spend most of their time waiting (IO-bound workloads).
Good use cases:
- Servers handling many concurrent connections: e.g., HTTP servers, gRPC servers, WebSocket servers.
- High-concurrency clients: e.g., calling many downstream services (DB, REST APIs, message queues).
- Asynchronous pipelines: tasks wait on external systems, but you want the code to look synchronous.
- Replacement for callback-heavy async code: you can write blocking-style code but still scale like async/reactive.
Not ideal:
- CPU-bound parallel computations: use platform threads (a classic ForkJoinPool, parallel streams, or structured concurrency with platform threads). Virtual threads don’t give you extra CPU; they just multiplex blocking tasks.
- Very short-lived tasks in huge numbers: if a task just increments a counter, spawning millions of VTs gives no benefit and can be slower than batching the work on a platform thread pool.
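As a rough sanity check of the IO-bound claim, here is a toy sketch of my own (`runBlockingTasks` is not a JDK API): it launches thousands of virtual threads that each block for 100 ms. Because they all park concurrently, total wall-clock time stays close to a single task’s latency.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;

public class VtScaleDemo {
    // Launch `n` tasks that each block for 100 ms; return the wall-clock time in ms.
    static long runBlockingTasks(int n) throws InterruptedException {
        Instant start = Instant.now();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    Thread.sleep(100); // stand-in for blocking IO; parks the VT only
                    return null;
                });
            }
        } // executor.close() waits for all tasks
        return Duration.between(start, Instant.now()).toMillis();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBlockingTasks(10_000) + " ms");
    }
}
```

For comparison, a fixed pool of 200 platform threads would need roughly 10,000 × 100 ms / 200 = 5 s for the same batch.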
Does a “virtual thread pool” exist?
Strictly speaking: no. Virtual threads don’t use a traditional pool like Executors.newFixedThreadPool.
Instead, each virtual thread is cheap to create (thousands to millions are possible). The JVM schedules them onto a small pool of carrier threads (platform threads) behind the scenes.
That means:
- You can just create a new virtual thread per task (`Thread.ofVirtual().start(...)` or `Executors.newVirtualThreadPerTaskExecutor()`).
- No need to reuse them: they are disposable, unlike platform threads.
- The JVM maintains an internal scheduler that runs VTs on carrier threads.
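You can observe the carrier mapping yourself; the exact output format is JVM-specific, so treat the sample line in the comment as illustrative:

```java
public class CarrierPeek {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() ->
            // Typically prints something like:
            // VirtualThread[#23]/runnable@ForkJoinPool-1-worker-1
            System.out.println(Thread.currentThread()));
        vt.join();
        System.out.println("isVirtual = " + vt.isVirtual());
    }
}
```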
So the pattern is:
```java
// Submit one task per virtual thread; close() waits for all of them.
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 10_000; i++) {
        int taskId = i;
        executor.submit(() -> {
            Thread.sleep(100); // blocking parks the VT, not the OS thread
            return taskId;
        });
    }
} // implicit shutdown() + wait for termination
```
Here, each task gets its own virtual thread. The JVM multiplexes them efficiently over a small number of OS threads.
How to choose between a platform thread pool and per-task virtual threads
- Use platform thread pools when:
  - You want to bound concurrency for CPU-bound tasks (e.g., 8 threads for 8 CPU cores).
- Use virtual threads when:
  - You want to scale IO-bound concurrency (e.g., 50k socket connections).
  - Thread creation overhead is a bottleneck.
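A minimal sketch of that split, with placeholder workloads (the summation and the 50 ms sleep stand in for real CPU and IO work):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolChoice {
    // CPU-bound work: bound parallelism to the core count.
    static long crunch() throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);
        try {
            return cpuPool.submit(() -> {
                long sum = 0;
                for (long i = 0; i < 1_000_000; i++) sum += i; // placeholder CPU work
                return sum;
            }).get();
        } finally {
            cpuPool.shutdown();
        }
    }

    // IO-bound work: a disposable virtual thread per task, no pooling.
    static String fetch() throws Exception {
        try (ExecutorService io = Executors.newVirtualThreadPerTaskExecutor()) {
            return io.submit(() -> {
                Thread.sleep(50); // placeholder for a network call
                return "response";
            }).get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(crunch() + " / " + fetch());
    }
}
```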
VTs or Reactor
Reactor vs Virtual threads: different approaches
Reactor
- Based on the reactive streams model (non-blocking, event-loop style).
- Forces you to compose async flows (Mono, Flux) to avoid blocking threads.
- Good for massive concurrency and backpressure control, but requires a different programming model.
Virtual threads
- Keep the blocking code style (imperative, linear).
- You can just call `db.query()` or `httpClient.send()` inside a virtual thread without worrying about blocking the OS thread.
- The scheduler multiplexes thousands of VTs on a few carriers.
If you find yourself writing wrappers like this just to keep blocking work off the event loop:

```java
// Wrap a blocking call (blockingDbCall is a placeholder) and push it to a worker pool.
Mono.fromCallable(() -> blockingDbCall())
    .subscribeOn(Schedulers.boundedElastic());
```

you could question: why Reactor at all for IO-heavy workloads?
With VTs, you don’t need reactive chains to achieve scalability: a simple synchronous style in a VT can scale to tens of thousands of connections.
Reactor is still useful if:
- You need backpressure.
- You already have a large reactive codebase.
- You want integration with libraries that are natively reactive (R2DBC, reactive Kafka, etc.).
A hybrid approach is fine: you can still integrate with the existing Reactor ecosystem.
```java
// Bridge VTs into Reactor: back a Scheduler with a virtual-thread executor.
Scheduler vtScheduler =
        Schedulers.fromExecutor(Executors.newVirtualThreadPerTaskExecutor());
```
This is the Reactor way to handle an IO-heavy request:

```java
Scheduler vtScheduler = Schedulers.fromExecutor(
        Executors.newVirtualThreadPerTaskExecutor());

Mono.fromCallable(() -> blockingQuery(id))   // blockingQuery is a placeholder
    .subscribeOn(vtScheduler)                // the blocking call runs on a virtual thread
    .subscribe(System.out::println);
```
What do pure virtual threads look like for IO-heavy tasks?
```java
// One disposable virtual thread per IO task.
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

for (int i = 0; i < 10_000; i++) {
    int id = i;
    executor.submit(() -> callRemoteService(id));   // callRemoteService: any blocking call
}
executor.close();   // waits for all submitted tasks to finish
```
or with structured concurrency:
```java
String service(String id) throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        var user  = scope.fork(() -> fetchUser(id));    // runs in its own VT
        var order = scope.fork(() -> fetchOrder(id));   // runs in its own VT
        scope.join().throwIfFailed();                   // wait; propagate the first failure
        return user.get() + " " + order.get();
    }
}
```
That’s it: the blocking style comes back without losing high throughput.
If you’re on Spring Framework 6 / Spring Boot 3 with JDK 21, you can drop WebFlux and use Spring MVC with virtual threads instead.
Each request gets its own VT, so blocking is cheap:
```properties
# application.properties (Spring Boot 3.2+): serve MVC requests on virtual threads
spring.threads.virtual.enabled=true
```
This is the “Loom future”: MVC simplicity + WebFlux scalability.
VTs or Netty
Virtual threads remove the original reason Netty exists
Netty was designed because the thread-per-connection model didn’t scale (too many OS threads).
With VTs, you can go back to the simple thread-per-connection model:
```java
// Blocking accept loop: one virtual thread per connection.
try (var listener = ServerSocketChannel.open()) {
    listener.bind(new InetSocketAddress(8080));
    while (true) {
        SocketChannel conn = listener.accept();          // parks the VT only
        Thread.ofVirtual().start(() -> handle(conn));    // handle: your per-connection logic
    }
}
```
Each connection just runs on its own virtual thread, with no event-loop gymnastics.
This eliminates the core scalability problem Netty solved back in 2004.
Can VTs replace Netty?
Not yet: Netty is not “just” about scaling connections. It also provides:
- Protocol implementations (HTTP, HTTP/2, HTTP/3, WebSockets, gRPC transport).
- Pipelines (handlers, encoders/decoders, SSL/TLS, compression).
- Zero-copy, pooled byte buffers for performance.
- Backpressure and flow control.
Virtual threads don’t give you any of that; they just let you block without guilt.
What does a minimal blocking HTTP server with VTs look like?
```java
import java.io.*;
import java.net.*;

public class VtHttpServer {
    public static void main(String[] args) throws IOException {
        try (var server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept();
                Thread.ofVirtual().start(() -> handle(socket)); // one VT per connection
            }
        }
    }

    static void handle(Socket socket) {
        try (socket;
             var in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             var out = new PrintWriter(socket.getOutputStream())) {
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) { } // skip headers
            out.print("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK");
            out.flush();
        } catch (IOException e) {
            // client went away; nothing to do
        }
    }
}
```
If you prefer NIO channels to streams but still want the simple blocking style:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class VtEchoServer {
    public static void main(String[] args) throws IOException {
        try (var listener = ServerSocketChannel.open()) {
            listener.bind(new InetSocketAddress(8080));
            while (true) {
                SocketChannel ch = listener.accept();   // channels default to blocking mode
                Thread.ofVirtual().start(() -> echo(ch));
            }
        }
    }

    static void echo(SocketChannel ch) {
        try (ch) {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            while (ch.read(buf) != -1) {                // read parks only the VT
                buf.flip();
                ch.write(buf);
                buf.clear();
            }
        } catch (IOException e) {
            // peer closed the connection
        }
    }
}
```
VT frameworks
- Helidon Níma (Helidon 4)
- Reactor Core (Spring Reactor / WebFlux)
- Spring Boot 3.2
Caveats, Limitations & Things to Watch
- Blocking calls in third-party libraries: if a library blocks in a way the JVM can’t unmount (e.g., native calls, or `synchronized` blocks that pin the carrier thread), the benefit is diminished.
- ThreadLocal usage: Because you may have many virtual threads, ThreadLocal data may become an issue (memory, leakage). Scoped values or other strategies are recommended.
- Resource limits beyond threads: Database connections, file descriptors, sockets, memory — you still need to manage these.
- Performance & tuning: The new runtime behaviors may reveal new performance bottlenecks, GC overheads, scheduling latency, etc.
- Maturity & ecosystem: Many frameworks, tools, APM agents, debuggers, etc., will need adaptation to fully embrace Loom.
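One common mitigation for the resource-limit caveat: guard the scarce resource with a `Semaphore` sized to match it, so even thousands of VTs can never exceed, say, a 50-connection DB pool. The sketch below is my own; the 50-permit limit, `query`, and the 5 ms sleep are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedVts {
    static final Semaphore dbPermits = new Semaphore(50);  // match the DB pool size
    static final AtomicInteger active = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    static String query(int id) throws InterruptedException {
        dbPermits.acquire();                 // waiting parks the VT cheaply
        try {
            peak.accumulateAndGet(active.incrementAndGet(), Math::max);
            Thread.sleep(5);                 // stand-in for the real DB call
            return "row-" + id;
        } finally {
            active.decrementAndGet();
            dbPermits.release();
        }
    }

    static int runAndReportPeak(int tasks) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                int id = i;
                executor.submit(() -> query(id));
            }
        } // close() waits for all tasks
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("peak concurrent queries = " + runAndReportPeak(1_000));
    }
}
```

The peak never exceeds the permit count, no matter how many VTs are spawned.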
ThreadLocals can blow up memory usage with millions of virtual threads.
Prefer Scoped Values (available in preview) or explicit context passing.
```java
// Scoped values (preview in JDK 21; compile with --enable-preview).
static final ScopedValue<String> USER = ScopedValue.newInstance();

void handleRequest(String userId) {
    // Bind USER for the duration of the call; visible to everything it invokes.
    ScopedValue.where(USER, userId).run(() -> process());
}

void process() {
    System.out.println("current user: " + USER.get());
}
```
Structured concurrency (for task groups): instead of manual CompletableFuture composition, use Loom’s structured concurrency API:
```java
// StructuredTaskScope is still incubating/preview in JDK 21.
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var a = scope.fork(() -> taskA());   // each subtask runs in its own VT
    var b = scope.fork(() -> taskB());
    scope.join().throwIfFailed();        // first failure cancels the sibling
    combine(a.get(), b.get());           // taskA/taskB/combine are placeholders
}
```
🗓 Project Loom Timeline / Roadmap
🔹 Early Days
2017: Project Loom was announced at OpenJDK (Brian Goetz & Ron Pressler leading).
2018–2020: Early prototypes with fibers and continuations. APIs unstable.
JDK 13–16: Some incubator APIs for continuations and structured concurrency experiments.
🔹 Key Milestones
JDK 19 (Sept 2022)
Virtual Threads introduced as a preview feature.
Structured Concurrency incubated.
JDK 20 (Mar 2023)
Virtual Threads previewed again with refinements.
Structured Concurrency incubator updated.
JDK 21 (Sept 2023, LTS release)
Virtual Threads finalized (no longer preview).
Structured Concurrency still incubating.
Scoped Values introduced in preview.
JDK 22 (Mar 2024)
Structured Concurrency (2nd incubator round).
Scoped Values (2nd preview).
JDK 23 (Sept 2024, not LTS)
Ongoing improvements in Structured Concurrency & Scoped Values.
Tooling support (debuggers, profilers) improving for virtual threads.
🔮 Near Future
JDK 25 (Sept 2025, LTS)
Expect Structured Concurrency and Scoped Values to become stable.
Virtual thread ecosystem maturity (Spring, Hibernate, Netty alternatives adapting).
JVM / JIT / GC optimizations specific to Loom workloads.