Where the ZIO Roadmap is Headed Next

A guest post by John A. De Goes, Architect of the ZIO open source library

Less than a year into its 1.0 release, ZIO — the open source library that helps you build next-generation reactive applications on the JVM — continues to grow rapidly. It’s now up to 1 million downloads a month. In addition, commercial adoption continues to climb, with dozens, even hundreds, of companies choosing to deploy ZIO-based applications in production.

I know of many companies deploying greenfield projects in ZIO and other companies replacing existing solutions with ZIO. Although I’m somewhat biased as one of the architects of ZIO, I believe that it is taking developer joy and productivity to a level never seen before in any open source Scala ecosystem.

But what are the upcoming improvements targeted for ZIO 2.0? And what are the two surprises not discussed at the inaugural ZIO World 2021?

Look for improved performance

ZIO 2.0 is an incremental upgrade, not a rewrite, and it won’t require a rewrite of your code. The original 1.0 design of ZIO is working well and has been proven in production by multiple Fortune 500 companies.

ZIO 2.0 is focused on adding a few missing ingredients and adding polish everywhere. One aspect of polish is improving performance. We don’t want any company to say, “We can’t do functional programming because it’s too slow.” Instead, I want companies to say, “We have to do functional programming because it’s faster than what we’re doing today.”

The runtime system already has a high-speed design that’s been hardened, but I’ve spent even more effort in making sure it’s as fast as it can be. The following benchmarks will give you a feel for how ZIO 2.0 compares with Cats Effect 3.0.

Graph showing collection operator benchmarks between ZIO 2.0 & Cats Effect 3.0. ZIO 2.0 outperforms for collectAll and collectAllParN, Cats Effect 3.0 for collectAllPar.
Graph showing region operator benchmarks between ZIO 2.0 & Cats Effect 3.0. ZIO 2.0 outperforms for Uninterruptible, Ensuring, UninterruptibleMask, and Bracket.
Graph showing interop operator benchmarks between ZIO 2.0 & Cats Effect 3.0. ZIO 2.0 outperforms for Unsafe Run (left) and Unsafe Run (right)
Graph showing iteration operator benchmarks between ZIO 2.0 & Cats Effect 3.0. ZIO 2.0 outperforms for foreach, foreach Fork, and foreach Fork/Await

The operators measured in these benchmarks are among the most complex, especially the region-based ones. Therefore, making these fast and correct is difficult — but the results speak for themselves.

You’ll get a new, fiber-aware scheduler

One of the areas where fiber-based effect systems can innovate is having a scheduler that is fiber-aware. Thanks to the work of Adam Fraser, this is happening in ZIO 2.0. A scheduler sits between all the fibers and determines which fibers will execute on which threads (and for how long). Thus, a scheduler is responsible for taking a million fibers and deciding how to execute them all efficiently on just four threads.
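The core job of any such scheduler can be sketched without ZIO at all. The following is a deliberately minimal, plain-JVM sketch (a fixed thread pool, nothing like ZIO's actual fiber-aware scheduler) of multiplexing many lightweight tasks onto just four threads:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object SchedulerSketch {
  // Run `n` tiny tasks (stand-ins for fibers) on a pool of just four
  // threads and return how many completed.
  def runTasks(n: Int): Int = {
    val pool = Executors.newFixedThreadPool(4) // four "carrier" threads
    val done = new AtomicInteger(0)
    (1 to n).foreach(_ => pool.execute(() => { done.incrementAndGet(); () }))
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    done.get
  }

  def main(args: Array[String]): Unit =
    println(runTasks(10000)) // all 10000 tasks share four threads
}
```

A fiber-aware scheduler goes far beyond this sketch — it can use knowledge of fiber workloads (work stealing, yield behavior) to decide which fibers run where and for how long.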

In ZIO 1.0, we used a scheduler built into Java, which is battle-tested but doesn’t take advantage of the inside knowledge we have in the ZIO library about fiber workloads. Building on work done in the Rust open-source community, Adam Fraser has created a scheduler that knows about fibers and optimizes how these fibers are mapped to physical threads.

The results are tremendous. In purely synthetic benchmarks, the ZIO 2.0 scheduler is way faster than the ZIO 1.0 scheduler, and it’s also quicker than the Cats Effect 3 scheduler.

Graph showing scheduler benchmarks between ZIO 2.0 & Cats Effect 3.0. ZIO 2.0 outperforms for Chained Fork, For Many, and Yield Many. They are neck and neck for Ping Pong.

Look for simpler layers

ZIO’s layers are recipes for building parts of an application’s dependency graph (like constructors, but more compositional and able to use resources). Thus, they can provide different parts of an application the context they need and make it easier to code interfaces rather than implementations (thereby improving testability).

Layers have confused people new to ZIO, and we’ve known for a while that we needed to improve the experience. In ZIO 2.0, we’ve spent a lot of effort figuring out how to make layers simpler.

Layers facilitate testability, but there were several problems with layers in ZIO 1.0, starting with the module pattern that we encouraged people to use — a pattern that was especially confusing for developers coming from Java.

The first thing that we’ve simplified is the module pattern. The new module pattern is instantly familiar to Java programmers, as it’s just OOP!

    trait UserRepository {
      def getUserById(id: Id): Task[User]
    }

    final case class UserRepositoryImpl(database: Database)
        extends UserRepository {
      def getUserById(id: Id): Task[User] = ...
    }

    object UserRepository {
      val live = (UserRepositoryImpl(_)).toLayer
    }
In addition, we’re simplifying the construction of more complicated layers by deleting nearly all the layer constructors and instead adopting a uniform and straightforward pattern that works for constructing layers of any complexity.

    val complexLayer =
      (for {
        ref      <- Ref.make(state)
        database <- ZIO.service[Database]
        logging  <- ZIO.service[Logging]
        config   <- ZIO.service[Config]
      } yield MyService(ref, database, logging, config)).toLayer

    val managedLayer =
      (for {
        ref      <- Ref.make(state).toManaged_
        database <- ZManaged.service[Database]
        logging  <- ZManaged.service[Logging]
        config   <- ZManaged.service[Config]
        service  =  MyService(ref, database, logging, config)
        _        <- service.initialize.toManaged_
        _        <- ZManaged.finalizer(service.destroy)
      } yield service).toLayer

Kit Langton has also brought a new feature for automatically constructing layers from their pieces, with highly detailed error messages. Since building complex layers (and encountering cryptic error messages) was one of the frequent complaints with ZIO 1.0 layers, we expect his work to improve developer experience in 2.0 dramatically.

    val layer =
      ZLayer.wire[Flour with Console](
        Console.live, Flour.live, Spoon.live)

With these changes, a developer new to ZIO 2.0 can be successful using layers immediately, without any training. Even though not every ZIO 1.0 developer uses layers, I think after 2.0, 90% of developers will be happily using layers.

ZIO 2.0 will have improved ergonomics

A key part of polish in ZIO 2.0 is improved ergonomics — simplifications. One way to simplify is to delete multiple ways of doing the same thing. Another way is to make sure the names of operators reflect their purpose in each domain. An example of the former is that in ZIO 1.0, you call ZIO.effectTotal or ZIO.succeed, and they do the same thing. Why have both? In ZIO 2.0, we will delete ZIO.effectTotal. As another example, ZIO.effect is not descriptive. It’s precise in a functional programming sense because it says we’re converting a side-effect into a functional effect.

More practically, however, when a developer uses this constructor instead of ZIO.succeed, it’s because they know there’s a possibility it will throw an exception. Therefore, they want to attempt something that might fail. So in ZIO 2.0, we will call this constructor ZIO.attempt.
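The distinction can be sketched with a toy model — plain Scala with `Either` standing in for the real ZIO types — where `attempt` catches the exception and surfaces it as a typed error, while `succeed` assumes the code cannot throw:

```scala
object AttemptSketch {
  // Toy model (not the real ZIO API) of the two constructors.
  // succeed: for code that is known not to throw.
  def succeed[A](a: => A): Either[Nothing, A] = Right(a)

  // attempt: for side-effecting code that might throw; the exception
  // becomes a typed error instead of escaping.
  def attempt[A](a: => A): Either[Throwable, A] =
    try Right(a) catch { case t: Throwable => Left(t) }

  def main(args: Array[String]): Unit = {
    println(attempt("42".toInt))   // Right(42)
    println(attempt("oops".toInt)) // Left(java.lang.NumberFormatException: ...)
  }
}
```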

There are other simplifications planned for ZIO 2.0. Not very many, but where relevant, we want to delete redundant operators, provide better names, and become more opinionated.

You’ll receive out-of-the-box profiling

A favorite feature coming to ZIO 2.0 is out-of-the-box profiling. Although we have great tools such as YourKit and JProfiler, if you try one of these tools on a ZIO application, you end up seeing a whole lot of the ZIO runtime system and very little of your own code. They can’t help you optimize asynchronous applications.

Thanks to the work of Maxim Schuwalow and Dejan Mijic, ZIO is getting a built-in profiler, which is simple to set up and writes data to a file. The profiling method used is causal profiling. This is a relatively new technique for profiling applications that works well for async code, and it works very well for ZIO.

Causal profiling measures the performance cost of each line of code by slowing down the other lines of code. Since ZIO’s execution tracing provides line number information, the profiler can attribute these measurements directly to lines in your own code.

Two charts tracing Program Speedup (vertically) and Line Speedup (horizontally) for causal profiling for fluidanimate in ZIO. In the first chart for Line 151 the line goes from 0% to -10% to -5%.  In the second chart for Line 184 the line goes from 0% to -10% to -20%

The profiler will help you find hotspots in any ZIO application.

Pinpoint where your application failed — and why

ZIO was the first effect system to support optional (opt-out) execution tracing, which provides detailed diagnostics for async applications when an error occurs. With applications built on Scala Future, stack traces are not helpful because they show the internal machinery of Future — and not your application code. ZIO’s execution tracing gives you async stack traces that help you pinpoint where your application failed and why.

This feature was a game-changer, but what we wanted to do for ZIO 2.0 was to make it faster and make the traces cleaner. Rob Walsh and others have put a lot of time and effort into revamping execution tracing. Using metaprogramming in Scala 3 and macros in Scala 2, ZIO 2.0 will keep track of which methods call into ZIO at compile-time — with virtually no runtime impact on performance.

Examples of benchmarks for execution tracing in ZIO 2.0

Since execution tracing in ZIO 1.0 halved performance in purely synthetic benchmarks, this means ZIO 2.0 is effectively doubling raw performance across the board. But, of course, this doesn’t mean your ZIO application will run 2x faster, because these benchmarks are purely synthetic, and real-world applications do many other things that take up real time.

This increased level of performance will only be available to code that uses ZIO directly. However, if you’re doing tagless-final, we will have an emulation mode that preserves execution traces, albeit with the slower performance that tagless-final entails.

We are also working on condensing and summarizing execution tracing results (which tend to be long, noisy, and difficult to digest), so when you see a failure in the log, you know exactly what went wrong and where to go to fix it.

Receive built-in metrics

ZIO has always been the effect system that stays with you after you compile the code and ship it to production, helping you track down failures and fix them.

That’s why ZIO 1.0 is the only effect system with lossless errors and fiber dumps and the first effect system with execution tracing. That’s also why ZIO 2.0 has causal profiling. With ZIO 2.0, we’re also introducing built-in metrics. Without adding any code, you will get the following real-time metrics from your ZIO application:

  • Fiber Lifetimes - How long are fibers living?
  • Fiber Failures - When fibers die, how do they die? Which are the most frequent causes of death? Which causes of death are typed errors versus defects? What percentage of fibers are interrupted before they complete?

ZIO 2.0 makes it trivial for you to add your application metrics anywhere, including counters, gauges, and histograms.

ZIO 2.0 metrics will have out-of-the-box support for Datadog, StatsD, and Prometheus, with New Relic on the way.

Screenshot of the DataDog dashboard showing various ZIO ZMX metrics.

ZHub: A new broadcast-based concurrent structure

ZIO has always had strong support for concurrent structures, including a semaphore, a promise, and so forth. But, unique in the world of Scala effect systems, ZIO also has its own super-fast, low-level ring buffer, which forms the heart of ZIO Queue.

ZIO Queue is 100% asynchronous and interruption-friendly. Moreover, it’s fast. How fast? If you compare ZIO Queue with the highly optimized FS2 Queue or the Cats Effect 3 queue, it blows them away in sequential and parallel workloads.

Graph showing queue benchmarks between ZIO 2.0 & Cats Effect 3.0. ZIO 2.0 outperforms for both parallel and sequential queues.

Although fast, ZIO Queue is not the best solution to all problems. For example, ZIO Queue is not an excellent solution for broadcasting scenarios (fan-out), which frequently occur in web and data applications. You can do broadcasting with ZIO Queue, but you need to create one ZIO Queue for each subscriber, and then every producer must write the same message into each queue.
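The workaround described above looks roughly like this — plain Scala collections standing in for ZIO Queue, just to show the shape of the problem:

```scala
object FanOutSketch {
  import scala.collection.mutable

  // One queue per subscriber: the fan-out workaround ZHub makes unnecessary.
  val subscribers: Seq[mutable.Queue[String]] =
    Seq.fill(3)(mutable.Queue.empty[String])

  // Every producer must write the same message into every queue.
  def publish(msg: String): Unit = subscribers.foreach(_.enqueue(msg))

  def main(args: Array[String]): Unit = {
    publish("hello")
    println(subscribers.map(_.dequeue())) // every subscriber sees "hello"
  }
}
```

With a hub, publishing becomes a single operation against one structure, rather than N writes against N queues.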

Adam Fraser implemented a broadcast-based concurrent structure built on a ring buffer. It’s called ZHub, and it ships with many different strategies for dealing with slow consumers. ZHub is dramatically faster than everything else out there for broadcast scenarios.

Graph showing hub benchmarks between ZIO hub, ZIO queue, FS2 topic & Cats Effect 3.0 Queue. ZIO hub and queue outperform for both parallel and sequential queues.

We want ZIO to be your one-stop shop for all the high-performance async structures. With ZIO 2.0, we’re better at achieving that goal than ever before.

ZChannel: A new foundation for ZIO streams

One of the most significant improvements coming to ZIO 2.0 is a new foundation for ZIO Streams. ZIO Streams is mature, but for some time now, we have been looking for a different basis — one that would be compositional and complete (in the sense of solving the full range of streaming problems without duplication or boilerplate).

Over the past year, we experimented with four or five different formulations of streams, but none of them had all the properties we were looking for. I have worked with Itamar Ravid, Dejan Mijic, Daniel Vigovsky, and Adam Fraser to develop what we believe is the future of streaming: a new basis called ZChannel.

A ZChannel is like an ordinary channel in Java in that you can perform read and write operations on a single channel: channels read from upstream and write to downstream.

Channels are very rich and can be created, transformed, and composed in many ways. Fundamentally, though, channels are very low-level, and there is an imperative feel to using them.

From a channel, however, you can easily create streams, sinks, and transducers. So, streams, sinks, and transducers can all be viewed as aspects of a unified whole. Everything can be done with channels. There is beauty and elegance to this encoding, but it’s not just theoretical. When everything is a channel, you can build a complete pipeline from streams, transform the elements with transducers, and aggregate the values (or dump them somewhere) using sinks. Ultimately, it is all just channel composition.
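To make the unification concrete, here is a deliberately simplified, self-contained model — plain Scala functions over iterators, nothing like ZIO's real encoding — in which streams, transducers, and sinks are all just channels:

```scala
object ChannelSketch {
  // A channel reads elements from upstream and writes elements downstream.
  type Channel[-I, +O] = Iterator[I] => Iterator[O]

  // A "stream" is a channel that ignores upstream and emits values.
  def stream[O](os: O*): Channel[Any, O] = _ => os.iterator

  // A "transducer" transforms elements as they flow through.
  def mapper[I, O](f: I => O): Channel[I, O] = _.map(f)

  // A "sink" folds the entire upstream into one summary value.
  def sink[I, O](fold: Iterator[I] => O): Channel[I, O] =
    in => Iterator.single(fold(in))

  // Composing channels is just function composition ("fusion" for free).
  def pipe[I, M, O](l: Channel[I, M], r: Channel[M, O]): Channel[I, O] =
    in => r(l(in))

  def main(args: Array[String]): Unit = {
    val pipeline =
      pipe(pipe(stream(1, 2, 3), mapper((_: Int) * 10)),
           sink((it: Iterator[Int]) => it.sum))
    println(pipeline(Iterator.empty).next()) // prints 60
  }
}
```

Even in this toy model, a stream-transducer-sink pipeline collapses into a single composed function, which hints at why a unified encoding enables fusion-style optimizations.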

When executing channels, we can perform optimizations, such as channel fusion, that are simply not possible when streams, transducers, and sinks have different structures.

ZIO Streams 2.0 is not going to look much different. It’s going to be the same API you know and love, but the new basis brings precise semantics and significantly higher performance (double the performance of Akka Streams on one benchmark and more than double that of FS2 3.0).

Diagram showing stream performance with ZIO Streams showing double the performance of Akka Streams and more than double of FS2 3.0.

We’re bringing aspects into ZIO 2.0 as a first-class construct

    trait ZIOAspect[-R, +E] {
      def apply[R1 <: R, E1 >: E, A](zio: ZIO[R1, E1, A]): ZIO[R1, E1, A]
    }

    sealed trait ZIO[-R, +E, +A] {
      def @@ [R1 <: R, E1 >: E](aspect: ZIOAspect[R1, E1]): ZIO[R1, E1, A] =
        aspect(this)
    }

ZIO Test has a concept of “aspects” — polymorphic functions that accept and return specs. They allow you to time out tests, run them in parallel, access environment variables, run tests only on the JVM, and so forth. Aspects are enormously powerful and very convenient.

A ZIO Test aspect transforms one spec to another spec, but aspects appear everywhere. Aspects appear in Caliban. They appear in ZIO Query. They appear in rezilience, a community ZIO library. Once you start seeing them, you can see them everywhere.

Fundamentally, an aspect is just a polymorphic function that allows you to modify the error, environment, or success type of some functional effect in a well-defined way.

Thanks to the work of Adam Fraser and me, we’re going to bring aspects into ZIO 2.0 as a first-class construct so that different library authors don’t have to re-invent the same thing.

ZIO 2.0 will ship with many different aspects, possibly including timeout, retry, repeat, bulkhead, rate-limiting, and other patterns. Third-party libraries can build their aspects that compose seamlessly with ZIO’s aspects.

    object ZIOAspect {
      def timeoutFail(d: Duration) = ...
      val bulkhead = ...
      val trace    = ...
      def rateLimit(n: Int) = ...
    }

    import ZIOAspect._

    myEffect1 @@ timeoutFail(60.seconds) @@ trace @@ rateLimit(200)

    val allAspects = ZIOAspect.allOf(timeoutFail(60.seconds), trace, rateLimit(200))

    myEffect2 @@ allAspects

Support for application state management

In other news, we will be bringing support for application state management directly to ZIO. Some people have wondered if ZIO will ever get more type parameters. It already has three, but what about another one for application state?

It turns out that ZIO never needs more than three type parameters. The reason is these three parameters (context, failure, and success) are as powerful as you can imagine, easily powerful enough to facilitate state management.

This support will only be a few lines of code, and it will give you the full power of the so-called “state monad” or “state monad transformer”, but without any of the drawbacks (it will be fast, concurrent-safe, and memory-sparing).

    final case class State[S](ref: FiberRef[S])

    final case class MyState(counter: Int)

    for {
      _     <- ZIO.updateState[MyState](state => state.copy(counter = state.counter + 1))
      count <- ZIO.getStateWith[MyState](_.counter)
    } yield count

As with everything in ZIO, this will be compositional, so you can take effects that deal with a “small” state and embed them into effects that deal with a “large” state.
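This "small state into large state" idea can be illustrated without ZIO at all: an update on a small piece of state lifts to an update on any larger state that contains it. The names below are illustrative, not the ZIO API:

```scala
object StateSketch {
  final case class Counter(n: Int)
  final case class AppState(counter: Counter, log: List[String])

  // A transition on the "small" state...
  val bump: Counter => Counter = c => Counter(c.n + 1)

  // ...embedded as a transition on the "large" state that contains it.
  def embed(f: Counter => Counter): AppState => AppState =
    s => s.copy(counter = f(s.counter))

  def main(args: Array[String]): Unit = {
    val s1 = embed(bump)(AppState(Counter(0), Nil))
    println(s1.counter.n) // prints 1
  }
}
```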

…And beyond ZIO 2.0

ZIO 2.0 is shaping up to be an incredible release — but that’s not all. Many of the exciting things are happening not in ZIO proper but in the larger ZIO ecosystem.

A new library: ZIO HTTP and microservices

Functional Scala developers have been using http4s for some time, and while this is a perfectly reasonable choice for a web backend for a ZIO application, more options are always better.

Now, ZIO developers can choose the new library, ZIO HTTP. Significantly faster than http4s and even quicker than Vert.x in some benchmarks, ZIO HTTP is fully ZIO native and has none of the higher-kinded type, type class, or monad transformer boilerplate common to http4s.

There’s also ZIO TLS HTTP, which is likewise very fast and ZIO native. ZIO Web is in active development (but not yet ready for use). Tapir is production-ready and has excellent support for ZIO. And Izumi offers a whole microservice framework that works beautifully with ZIO.

Finally, functional Scala developers can pick the right tool for the right job.

Improved persistence

For persistence, ZIO users have long been able to use Doobie, and more recently, Skunk. But now, ZIO Quill feels more like a native ZIO library because it takes advantage of the ZIO environment.

ZIO SQL is fully operational right now and not too far from a first release.

Support for the full range of Redis operations

ZIO Kafka is extensively used in production. Daniel Vigovsky has done a fantastic job with ZIO AWS, which packages the full AWS API in a ZIO native package. ZIO DynamoDB is in active development and features auto-batching and auto-parallelization.

ZIO Redis will be the best Redis client on the JVM (not just in the Scala ecosystem). Fast, with support for the full range of Redis operations in transactional and non-transactional mode, ZIO Redis also supports custom codecs and ships with a test Redis.

Uncompromising libraries

Many of the ZIO libraries I’ve mentioned (and others) go beyond the “fabric” that ZIO provides, reaching up to concrete problems in specific domains. As they do this, they will have a more significant impact on the broader industry.

Next-generation functional Scala libraries are going to be pragmatic. They’re going to be focused on ergonomics and usability. Not the academic stuff (which still has its place).

New libraries are aggressive. They tackle messy, real-world problems without fear. They’re not afraid to use familiar names, or to throw away type classes, implicits, and type-level programming. They’re not afraid to make compromises and to build something that’s accessible, user-friendly, and easy to teach, but still principled and still built on the beauty of compositional, statically-typed functional programming.

I believe that this rapidly emerging functional Scala ecosystem will do wonders for commercial functional programming.

From my perspective, it’s inspiring to see functional programming start to become mainstream in the Scala community. But it can’t stay where it is today. There are just too many good libraries. Too much education. Too many great teams. Functional Scala simply cannot be contained.

It’s a revolution in functional programming. I’m very excited and honored to be a part of this change and help others play their role. If you’re in the market for a functional Scala job, I’d recommend talking to the folks at Capital One. They know what they’re doing and have an incredible passion for using functional Scala to achieve significant competitive advantages. My own company, Ziverge, also hires for functional Scala positions.

There’s never been a better time to be a functional Scala developer!

John A. De Goes, Architect of the ZIO open source library

A mathematician by training but a software engineer by vocation, John A. De Goes has been professionally writing software for more than 25 years. John has contributed to dozens of open source projects written in functional programming languages, including ZIO, a library for asynchronous and concurrent programming in functional Scala. In addition to speaking at Strata, OSCON, BigData TechCon, NEScala, ScalaWorld, Scala IO, flatMap, Scalar Conf, LambdaConf, and many other conferences, John also published a variety of books on programming. Currently, John heads Ziverge Inc, a company committed to solving hard business problems using the power of functional programming.
