Building Next Gen Applications on JVM with ZIO

A guest post by John A. De Goes, Architect of the ZIO open source library

If you work in Scala, you likely know ZIO, an open source library that helps you build next-generation reactive applications on the JVM. When programmers use ZIO, they often become more productive: They solve problems with less effort, lower their maintenance cost, and have stronger compile-time guarantees.

ZIO is for building complex applications. It’s also for building applications that work correctly — and maintaining applications over time.

ZIO is composed of the following four core pieces:

  1. Core - The ZIO core provides data types that help you build scalable, concurrent, non-blocking, resilient applications that never leak resources.
  2. STM - The STM module helps you build non-blocking concurrent structures with transactional guarantees, without deadlocks or race conditions.
  3. Streams - The stream module helps you build high-performance, concurrent streaming applications with automatic back-pressuring and resource safety.
  4. Test - ZIO Test (the testkit built into ZIO) helps you write fast, deterministic tests that do not require interacting with external systems.

Skirt the 10k limit with ZIO

The JVM exposes a construct called a thread, which lets you perform multi-threaded programming. Multi-threaded programming allows programmers to get more work done in a shorter amount of time. It's how programmers can reduce application latency and increase throughput. With multithreading, you also can reduce the time it takes to process requests and speed up analytics and machine learning.

You accomplish these tasks by harnessing all the computational resources on your computer via threads. Early on, there was a decision to map JVM threads to operating system threads — this decision has had profound implications for the evolution of JVM programming languages and libraries.

One implication: JVM-based applications have run into scalability problems because threads don’t scale very well. Threads are heavyweight, bulky structures — if you have too many of them, they slow down your application.

At least four different factors cause this overhead:

  • Stack - Every thread in your application has a stack that is pre-allocated when the thread is created, and it consumes memory even when the thread is not using it.
  • Manual Lifetimes - Threads are explicitly started, and there’s no safe way to stop them, so they run to completion unless unique logic is baked into them. Since people don’t usually write this logic, threads do a lot of unnecessary work (see also: the problem of un-cancelable futures in Scala).
  • Garbage Collection - Every JVM thread becomes a new root for garbage collection analysis. As the JVM determines what memory is unused, it must look at all the references held by all the live threads — which slows your app down.
  • Context Switching - The more threads you have running, the more time your computer spends context switching. This is when your hardware saves the state of one thread, switches to another thread, and restores the state of that thread. (This happens because you always have more operating system-level threads than CPU cores.)

Thanks to these limitations, there is a barrier, the so-called 10k limit. The 10k limit is roughly the maximum number of threads that you can have in a modern application before the threading overhead starts to dominate runtime.

10k is not a very high number — if you’re serving millions of users, you have countless connections, most of which aren’t doing anything. But if each one is running on a single thread, it’s an enormous amount of overhead.

What ZIO does is give you a different story for scalability. Instead of a thread for concurrency and parallelism, ZIO gives you a similar construct called a fiber. A ZIO fiber is essentially a virtual thread. Eventually, Project Loom will bring native virtual threads to the JVM, but that will take years — and it will take even longer for the libraries and companies to upgrade. So until this happens, we have ZIO fibers.

ZIO fibers are lightweight threads implemented at the library level. Whereas every JVM thread maps to one operating system thread, ZIO will run hundreds or thousands of fibers on a single JVM thread, achieving massive scalability improvements.

Fibers have compelling advantages:

  • Automatic Garbage Collection - Fibers that are not doing active work (suspended fibers) — and which can never do active work again, because no one can wake them up — are automatically garbage-collected. Most of the time, you don't have to worry about shutting down fibers manually, because they will be garbage-collected (and all their memory reclaimed) when they can no longer be re-activated.
  • Dynamic Stacks - Unlike JVM threads that have pre-allocated stacks, fibers have dynamic stacks that start small. These stacks grow as needed and shrink when possible. This results in significantly more efficient memory utilization, as well as the elimination of many stack-overflow errors.
  • No GC Roots - Fibers do not add new garbage collection roots, so they simplify the garbage collection process.
  • Less Context Switching - When a fiber is not doing active work, it yields control to the thread executing the fiber, which is free to switch over to another fiber. This results in less pre-emption, which improves efficiency by reducing context switching.

Because of these advantages, you can easily have a ZIO application with a hundred thousand or even a million concurrent fibers. In addition, they give you the ability to scale to levels not possible with JVM-level threads.
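As a minimal sketch (assuming the ZIO 1.x API), forking far more fibers than the 10k thread limit allows is unremarkable:

```scala
import zio._
import zio.duration._

// Fork 100,000 fibers — far past the 10k thread limit — each of
// which sleeps briefly, then wait for all of them to finish.
object ManyFibers extends zio.App {
  def run(args: List[String]): URIO[ZEnv, ExitCode] =
    ZIO
      .forkAll(List.fill(100000)(ZIO.sleep(10.millis)))
      .flatMap(_.join)
      .exitCode
}
```

Each fiber here costs only a small heap allocation, so the whole program runs on a handful of JVM threads.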

ZIO was the first library to support fibers in the Scala programming language, and it has the most comprehensive and powerful support for fibers.

No more callbacks

    // Legacy callbacks:
s3Get(key, 
  error => log(error),
  value => s3Put(key, enrichProfile(value),
    error => log(error),
    _ => ()))

// Asynchronous but with semantic blocking:
val enrich = 
  for {
    value <- s3Get(key)
    _     <- s3Put(key, enrichProfile(value))
  } yield ()
  

With ZIO, you never have to deal with callbacks. However, many libraries (such as Netty) expose some form of callbacks, sometimes under the guise of a “listener.” Callbacks and listeners are very difficult to work with for a couple of reasons:

  • Nesting - If, inside of a callback or listener, you must invoke another callback- or listener-based API, then propagating information to the top-level quickly becomes tedious and error-prone.
  • Incompleteness - Callback and listener APIs lose many capabilities of the host language. For example, you cannot use try/finally or catch exceptions. In addition, the ordinary machinery of synchronous programming breaks down with callback and listener APIs.

ZIO lets you take legacy callback and listener APIs and wrap them to look like they are blocking, even though they are 100 percent asynchronous. ZIO also gives you the operators necessary to recapture everything you lose with callback and listener APIs. For example, ZIO gives you an operator called ensuring, which has the same power as try/finally, except it works with both synchronous and asynchronous code. Similarly, ZIO gives you the ability to catch exceptions uniformly — regardless of whether they are synchronous or asynchronous exceptions.

ZIO gives you all the tools you know and love from your host programming language — but in a way that works seamlessly with asynchronous code. You never have to see or use a callback or listener API again.
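As a sketch (with a hypothetical `s3GetCallback` API, assuming the ZIO 1.x `effectAsync` constructor), wrapping a callback API takes only a few lines, after which the result composes like any other effect:

```scala
import zio._

// Hypothetical legacy callback-based API we want to wrap:
def s3GetCallback(key: String)(
    onError: Throwable => Unit,
    onSuccess: String => Unit
): Unit = ???

// Wrap it once; callers never see a callback again:
def s3Get(key: String): IO[Throwable, String] =
  ZIO.effectAsync[Any, Throwable, String] { resume =>
    s3GetCallback(key)(
      error => resume(ZIO.fail(error)),
      value => resume(ZIO.succeed(value))
    )
  }
```

The wrapped `s3Get` can now be used in for comprehensions, timed out, retried, and canceled like any other ZIO effect.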

Maximize parallelism in your application

    // Trivially parallelize with precise control:
ZIO.foreachParN(20)(urls) { url =>
  for {
    data        <- load(url)
    json        <- parseToJson(data)
    transformed <- transform(json)
  } yield transformed
}
  

ZIO gives you potent tools to maximize parallelism in your application. Parallelism may already be baked into the web framework you're using, but if you're writing more specialized code, or if you have ultra-low-latency requirements, it makes sense to roll your own parallelism.

ZIO’s parallelism can scale beyond the number of cores in your machine because of non-blocking code. You can initiate hundreds of concurrent HTTP requests “in parallel” because the actual IO traffic may be done by NIO (non-blocking IO on the JVM).

Every single operator in ZIO that can have a parallel version does have a parallel version. So not only do you get parallel operators, but you get parallel operators with concurrency limits.

ZIO gives you foreach to iterate over a collection, performing an effect for each element and collecting the results. Because this operation can support a parallel version, ZIO also gives you the parallel variant called foreachPar.

ZIO also gives you foreachParN to cap the concurrency level. If you have a list with a million things in it, you shouldn’t be trying to do all one million things in parallel — you should be using the capped version instead.

Parallel code written with ZIO is automatically efficient and safe. It can be canceled and timed out without any effort or resource leaks.
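As a sketch (assuming the ZIO 1.x API, with hypothetical `urls` and `load`), capping concurrency and adding a timeout is one method call each:

```scala
import zio._
import zio.clock.Clock
import zio.duration._

// Hypothetical inputs for illustration:
val urls: List[String]              = List("https://a.example", "https://b.example")
def load(url: String): Task[String] = ???

// At most 20 requests in flight; if the whole batch takes more than
// 30 seconds, every in-flight fiber is interrupted and cleaned up.
val limited: ZIO[Clock, Throwable, Option[List[String]]] =
  ZIO.foreachParN(20)(urls)(load).timeout(30.seconds)
```

The `Option` in the result type records whether the batch finished before the deadline.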

Enable ordinary programmers

    // Commit conditional transactions without locks or condition
// variables, free of race conditions and deadlocks:
def acquireConnection = 
  STM.atomically {
    for {
      connection <- available.get.collect { 
                      case head :: _ => head
                    }
      _          <- available.update(_.drop(1))
      _          <- used.update(connection :: _)
    } yield connection
  }
  

ZIO ships with high-performance, non-blocking concurrent data structures that are the building blocks of scalable concurrent applications. Beyond these fast data structures, ZIO ships with a next-generation concurrency model powered by STM (Software Transactional Memory).

STM gives you compositional and type-safe transactions on data stored in memory on a single node. These transactions can wait until preconditions are satisfied before committing, which subsumes the use cases for condition variables (and promises/deferreds).

STM is superior to locks and condition variables (such as the machinery you find in java.util.concurrent) in terms of safety. Locks don’t compose, which means if you have two locks guarding access to two different pieces of mutable state, you can’t just acquire both locks to mutate both simultaneously. You’ll run into deadlocks.

This means that concurrent code using locks and condition variables is not only prone to deadlocks but also hard to change. If you suddenly decide you need to change two things at once in lock-based code, you need to re-engineer everything.

With STM, you can just compose smaller transactions into larger transactions, with well-defined semantics free from deadlocks. The STM model is also simpler because to wait for something to happen, you just use a simple STM operator. You don’t need to create and wait on separate condition variables.

STM enables ordinary programmers to solve all the concurrent problems they run into in a way that’s easy to maintain as requirements change.
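A sketch of that composition (assuming the ZIO 1.x STM API, with hypothetical account balances): two independently useful transactions combine into one atomic transfer, with `STM.check` standing in for a condition variable:

```scala
import zio._
import zio.stm._

// Retries automatically until the balance is sufficient, then
// debits the account — all inside the transaction:
def withdraw(account: TRef[Int], amount: Int): STM[Nothing, Unit] =
  for {
    balance <- account.get
    _       <- STM.check(balance >= amount)
    _       <- account.update(_ - amount)
  } yield ()

def deposit(account: TRef[Int], amount: Int): STM[Nothing, Unit] =
  account.update(_ + amount)

// Composition: both updates commit together or not at all.
def transfer(from: TRef[Int], to: TRef[Int], amount: Int): UIO[Unit] =
  STM.atomically(withdraw(from, amount) *> deposit(to, amount))
```

Note that `withdraw` and `deposit` were written separately, yet composing them required no new locking discipline.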

Eliminate resource leaks

    // Package up acquire & release into a Managed resource, with no
// possibility of leaks:
val managedFile = Managed.make(open(file))(close(_))

managedFile.use { resource =>
  (for {
    data <- read(resource)
    _    <- aggregateData(data)
  } yield ()).forever
}
  

ZIO provides powerful tools that eliminate resource leaks. The most compelling example is the Managed data type, which lets you describe how to acquire and release a resource. You create one of these by specifying an effect that acquires the resource and a function that can release it (if given the resource).

Once you have a managed value, you can combine it with other managed values or transform it into another managed value, using all the powerful operations that ZIO gives you.

When you are using Managed, you are guaranteed that you will never leak resources. You can open files, acquire connections from a database pool, or acquire remote resources from a server, and so forth, with no leaks.
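As a brief sketch (hypothetical `openFile`/`closeFile` effects, assuming the ZIO 1.x Managed API), composing two managed resources guarantees both are released in reverse order of acquisition, however `use` exits:

```scala
import zio._
import java.io.BufferedReader

// Hypothetical acquire and release effects:
def openFile(path: String): Task[BufferedReader] = ???
def closeFile(r: BufferedReader): UIO[Unit]      = ???

// Both resources are acquired in order and released in reverse
// order, even if `use` fails or is interrupted:
val both: Managed[Throwable, (BufferedReader, BufferedReader)] =
  for {
    in  <- Managed.make(openFile("in.txt"))(closeFile)
    out <- Managed.make(openFile("out.txt"))(closeFile)
  } yield (in, out)
```

The composed value is itself a Managed, so it can be combined further or passed around like any other value.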

Without ZIO, you can't achieve these guarantees in modern applications because they are a hybrid of sync and async code, with listeners, callback-based APIs, exceptions, return values — so much complexity that it guarantees resource leaks.

Globally efficient, automatically

    // Race two effects, canceling the loser:
val geoLookup = geoIpService.lookup(ipAddress)
val dbLookup  = userRepo.getProfile(userId).map(_.location.toLatLong)

val fastest = geoLookup.race(dbLookup)
  

Unlike Scala’s Future, ZIO is globally efficient, automatically: it only ever performs the minimum amount of computation necessary, even in the presence of timeouts, cancelation, concurrent races, and parallel failures.

When you're done with a Future and you no longer need its result, it will continue to run in the background until it’s finished — which could be two hours or two days later — leading to thread and memory leaks that slow down application performance. By contrast, ZIO’s global efficiency manifests itself in several ways:

  • Racing - When you race two effects to get back the first success, the loser of the race will be canceled in a resource-safe way.
  • Timeouts - If a timeout elapses and the executing effect hasn’t finished yet, then the execution will be canceled instantly in a resource-safe way.
  • Parallelism - If you are trying to execute multiple things in parallel, and one fails, then the other ones will be canceled instantly in a resource-safe way.

In theory, you could achieve global efficiency by writing lots of code by hand. However, you would have to pass cancellation signals around and constantly check them to see if it’s time to short circuit and abandon the current computation. Practically speaking, no one would ever attempt this.

ZIO helps you achieve global application efficiency automatically, without writing any of this machinery by hand. This can make a real difference for some applications. For example, in large microservice-oriented applications where one incoming request can spawn a hundred others, ZIO can automatically ensure that the entire dependency graph is canceled and all resources safely released when a computation is no longer needed.

If you compare ZIO-based applications with Future-based applications, this can lead to measurable impact on throughput with realistic workloads — so significant that most large-scale Scala applications end up inventing their own ad hoc cancellation framework.

Operate on infinite streams of data

    // Trivially construct complex transformation pipelines with
// concurrency, resources, & more, which operate on infinite data 
// in constant memory:
val wordCount =
  ZStream.fromInputStream(Files.newInputStream(path))
    .transduce(ZSink.utf8Decode)   
    .transduce(ZSink.splitWords)
    .run(ZSink.count)
  

ZIO has powerful concurrent streams that allow you to operate on infinite data streams in a declarative fashion without leaking resources. ZIO Streams give you the capabilities that any streaming library needs: a producer of values, which ZIO calls a Stream, and a consumer of values, which ZIO calls a Sink. These two abstractions are duals (in the category theory sense of the word). You connect a stream to a sink to form a full pipeline, which can then be executed.

ZIO gives you the tools you need to solve all problems in the class of concurrent, resource-safe, streaming applications in a package that’s small enough to master quickly.

Achieve high testability

    // Write fast, deterministic unit tests on any program,
// even interactive, non-deterministic ones:
val program =
  for {
    _    <- putStrLn("What is your name?")
    name <- getStrLn
    _    <- putStrLn(s"I will wait ${name.length} seconds, ${name}")
    _    <- clock.sleep(name.length.seconds)
  } yield ()

val deterministicResult = program.provideLayer(testServices)
  

ZIO achieves high testability using the ZIO Environment, making it easy for developers to follow object-oriented best practices, like coding to an interface, not an implementation. In addition, the ZIO Environment lets you make fast-running, deterministic tests, even if your code is interacting with consoles or databases or system time.

The ZIO Test module that ships with ZIO lets you write fast, deterministic unit tests that verify the modern async, concurrent, resource-safe code you're building is a hundred percent ready to go into production.
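As a sketch (assuming the ZIO 1.x Test API), the interactive, time-dependent `program` above can be tested deterministically: the test clock only advances when you tell it to, so the sleep completes instantly:

```scala
import zio.duration._
import zio.test._
import zio.test.environment.{ TestClock, TestConsole }

// Deterministic test: feed simulated input, then advance virtual
// time past the sleep so the program completes immediately.
testM("waits one second per character of the name") {
  for {
    _     <- TestConsole.feedLines("Jo")
    fiber <- program.fork
    _     <- TestClock.adjust(2.seconds)
    _     <- fiber.join
  } yield assertCompletes
}
```

No wall-clock time passes, so the test runs in milliseconds regardless of how long the user's name is.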

Build resilient apps

    // Guided by types, build resilient apps:
val retryPolicy = 
  (Schedule.exponential(10.millis)
    .whileOutput(_ < 1.second) andThen 
      Schedule.spaced(60.seconds)) && 
  Schedule.recurs(100)

val result = callFlakyApi(request).retry(retryPolicy)

// Let the compiler tell you what can fail, and why:
val infallible = 
  result.catchAll(_ => fallback)
  

ZIO helps you build resilient applications that deal gracefully with transient failures and leverage the Scala compiler to ensure correct error-handling behavior.

Databases and APIs go down all the time. Still, ZIO’s compositional schedules and retries ensure you can handle any such transient failure scenario without boilerplate and with precise behavior that can be tested deterministically and comprehensively in unit tests.

Not only does ZIO give you the ability to handle transient failures from external resources, but it also gives you the tools necessary to manage internal failures. In another feature that ZIO pioneered, ZIO has typed errors that mirror typed successes. Think of it like this: Just like how typed successes help you reason about successes — typed errors help you reason about failures.

Typed failures allow you to look at a snippet of code in your IDE and see if it can fail (which is the most important property to understand when writing resilient code) and how it can fail (which is critical when handling business and domain errors). If you handle an error using an operator like catchAll, your IDE will tell you the code can’t fail anymore.
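For instance (a hypothetical domain, sketched against the ZIO 1.x API), the error type parameter documents exactly how an effect can fail, and `catchAll` narrows it away entirely:

```scala
import zio._

sealed trait UserError
case object NotFound extends UserError

// The signature says: this can fail, and only with a UserError.
def getUser(id: Long): IO[UserError, String] = ???

// After catchAll, the error type is Nothing: the compiler proves
// this effect can no longer fail.
val infallible: UIO[String] =
  getUser(42L).catchAll(_ => ZIO.succeed("anonymous"))
```
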

With ZIO, the Scala compiler gives you everything you need to write bulletproof applications that perform according to specifications. Once you get used to this power, you can’t go back — working without typed errors feels like coding without types at all.

Gain compositionality

Compositionality, the defining feature of good functional programming, means you can solve problems by snapping together Lego-like building blocks, without edge cases. Imagine creating managed resources, using them in parallel, timing out each worker, timing out the whole computation, and then retrying the entire thing with exponential backoff on any individual failure: With ZIO, this takes only a few lines of code.
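A sketch of that scenario (hypothetical `managedResources` and `worker`, assuming the ZIO 1.x API):

```scala
import zio._
import zio.duration._

trait Resource // hypothetical resource type

// Hypothetical building blocks:
def managedResources: Managed[Throwable, List[Resource]] = ???
def worker(r: Resource): Task[Unit]                      = ???

// Parallel workers over managed resources, per-worker timeouts,
// a global timeout, and exponential-backoff retries:
val job =
  managedResources
    .use(rs => ZIO.foreachPar(rs)(r => worker(r).timeout(30.seconds)))
    .timeout(10.minutes)
    .retry(Schedule.exponential(100.millis) && Schedule.recurs(5))
```

Each requirement maps to one operator, so changing a requirement means changing one line.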

Compositional systems have the following property: Once you’ve developed a solution, if the business comes in and asks you to change the solution, you just go into the code and snap another piece here or there — without having to re-engineer the whole system.

With ZIO applications, a slight change in business requirements means a small change in the code. But, thanks to its roots in functional programming, you can refactor with reckless abandon, knowing that refactoring won’t change the behavior of your program.

ZIO helps you build modern applications that are scalable, asynchronous, parallel, concurrent, leak-free, efficient, streaming, testable, and resilient. You also gain compositionality.

It’s fun to write code using ZIO because you can understand it. You can test it and change it with confidence. It doesn’t matter what requirements are thrown at you — you’ve got it covered.

Next up: Where the ZIO Library Roadmap is Headed


John A. De Goes, Architect of the ZIO open source library

A mathematician by training but a software engineer by vocation, John A. De Goes has been professionally writing software for more than 25 years. John has contributed to dozens of open source projects written in functional programming languages, including ZIO, a library for asynchronous and concurrent programming in functional Scala. In addition to speaking at Strata, OSCON, BigData TechCon, NEScala, ScalaWorld, Scala IO, flatMap, Scalar Conf, LambdaConf, and many other conferences, John also published a variety of books on programming. Currently, John heads Ziverge Inc, a company committed to solving hard business problems using the power of functional programming.
