Go tutorial: Doing good by writing bad code - part 1

After decades of programming in Java, for the past several years I have been mostly working in Go. Now, working in Go is great, primarily because the code is so easy to follow. Java had simplified the C++ programming model by removing multiple inheritance, manual memory management, and operator overloading. Go does the same and continues this trend towards a simple, straightforward programming style by removing inheritance entirely, as well as function overloading. Straightforward code is readable code and readable code is maintainable code. And that’s great for my company and co-workers.

Like all cultures, software engineering has its share of legends, of stories that are whispered at the water cooler. We’ve all heard rumors about developers who, rather than focusing on delivering the best products, are instead focused on job security. They don’t want maintainable code, because that means other people can understand what they are doing. Would that even be possible with Go? How would they make Go code as hard to follow as possible? It’s not easy. Let’s explore some options.

I’m sure you are thinking, “How badly can you abuse a programming language? Is it possible to write such horrible code in Go that someone could become irreplaceable?” Not to worry. When I was an undergraduate, I had a project where I maintained some Lisp code written by a grad student. He had managed to write Fortran using Lisp. The code looked something like this:

    (defun add-mult-pi (in1 in2)
        (setq a in1)
        (setq b in2)
        (setq c (+ a b))
        (setq d (* 3.1415 c))
        d
    )
  

There were dozens of files with code like this. It was absolutely horrific and absolute genius. I spent months trying to make sense out of the code. Writing bad Go code is a piece of cake compared to that.

There are many ways to make code terrible, but we’ll just focus on a few. In order to properly do evil, you have to know how to do good. We’ll go through these points one at a time, look at how goody-goody Go programmers would do things, and then see how we can do the exact opposite.

Poor packaging

Packages are a nice place to start because they are so unassuming. How can organizing your code make a difference?

In Go, the name of the package is used to refer to the exported item; we write `fmt.Println` or `http.RegisterFunc`. Because the package name is so visible, good Go developers make sure that the package name describes what the exported items are. We shouldn’t have packages named util because we don’t want names like `util.JSONMarshal`, we want names like `json.Marshal`.

Good Go developers also don’t create a single DAO or model package. For those who aren’t familiar with the term, a DAO is a “data access object” — the layer of code that talks to your database. I used to work at a place where six Java services all imported the same library with the same DAOs to hit the same database that they all shared, because, you know, microservices?

If you have a single package with all of your DAOs, it becomes more likely that you’ll end up with a cyclic dependency between packages, which Go doesn’t allow. And if you have multiple services that bring that single DAO package in as a library, you also might end up with a situation where a change to support one service requires you to upgrade all of your services or else something will break. That’s called a distributed monolith and it’s incredibly difficult to do updates once you’ve built one.

Once you know how packaging is supposed to work and what it prevents, it’s easy to be evil. Organize your code badly and give your packages bad names. Break your code up into packages like model and util and dao. If you want to really test boundaries, see if you can get away with packages named after your pet or your favorite color. When people end up with cyclic dependencies or distributed monoliths because they tried to use your code in a slightly new way, you get to sigh and tell them they are doing it wrong.

Improper interfaces

Now that we have our packages all messed up, we can move on to interfaces. Interfaces in Go are not like interfaces in other languages. The fact that you don’t explicitly declare that a type implements an interface seems like a small detail at first, but it actually completely flips around the concept of interfaces.

In most languages with abstract types, the interface is defined before, or alongside, the implementation. You have to do this because sooner or later, you will need to swap in different implementations of an interface, if only for testing. If you don’t create the interface ahead of time, you can’t go back later and slip in an interface without breaking all of the code that uses the class, because they’ll have to be re-written to refer to the interface instead of the concrete type.

For this reason, Java code often has giant service interfaces written with lots of methods. Classes that depend on these interfaces then use the methods they need and ignore the rest. Writing tests is possible, but you’ve added an extra layer of abstraction and when writing tests, you often fall back to using tools to generate implementations of all those methods that you don’t care about.

In Go, the implicit interfaces define what methods you need to use. It’s the using code that owns the interface, not the providing code. Even if you are using a type with tons of methods defined on it, you can specify an interface that only includes the methods that you need. And different code that uses different parts of the same type will define different interfaces that only cover the functionality that they need. Usually, these interfaces only have a couple of methods.

This makes your code easier to understand, because not only does your method or function declaration define what data it needs, it also defines exactly what functionality it’s going to use. This is one reason why good Go developers follow the advice, “Accept interfaces, return structs.”
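To make this concrete, here’s a minimal sketch (the type and function names are invented for this post, not taken from any library): a provider type with several methods, and a consumer that declares a one-method interface covering only what it uses.

```go
package main

import "fmt"

// UserStore is a hypothetical provider type with several methods.
type UserStore struct{}

func (UserStore) Load(id int) string       { return fmt.Sprintf("user-%d", id) }
func (UserStore) Save(id int, name string) {}
func (UserStore) Delete(id int)            {}

// Loader is owned by the consuming code: it names only the one
// method this code actually uses.
type Loader interface {
	Load(id int) string
}

// Greet's signature documents exactly what functionality it needs.
func Greet(l Loader, id int) string {
	return "Hello, " + l.Load(id)
}

func main() {
	// UserStore satisfies Loader implicitly; no declaration required.
	fmt.Println(Greet(UserStore{}, 7)) // prints Hello, user-7
}
```

Because satisfaction is implicit, `UserStore` never mentions `Loader`; any other type with a matching `Load` method would work in its place.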

But, you know, just because it’s a good practice doesn’t mean you have to do it.

The best way to make your interfaces evil is to go back to what other languages do and define interfaces ahead of time, as part of the code that’s called. Define really big interfaces, with lots of methods, that are shared by all of the clients of the service. That makes it unclear which methods are actually needed, which creates complication, and complication is the friend of the evil programmer.
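A sketch of that evil, with hypothetical names: a provider-owned, five-method interface that every client must depend on, even a function that only ever calls one method. Notice how a test stub has to implement all five methods just to exercise `Greet`.

```go
package main

import "fmt"

type User struct{ ID int }

// An evil, provider-owned fat interface. Every client is coupled to
// all five methods, though each client may use only one or two.
// (These names are hypothetical, invented for this sketch.)
type UserService interface {
	Load(id int) (User, error)
	Save(u User) error
	Delete(id int) error
	Audit(id int) ([]string, error)
	ResetPassword(id int) error
}

// Greet only calls Load, but its signature demands the whole
// interface, so readers can't tell what it actually needs.
func Greet(s UserService, id int) string {
	u, _ := s.Load(id)
	return fmt.Sprintf("hello, user %d", u.ID)
}

// A test stub must implement all five methods just to test Greet.
type stub struct{}

func (stub) Load(id int) (User, error)   { return User{ID: id}, nil }
func (stub) Save(User) error             { return nil }
func (stub) Delete(int) error            { return nil }
func (stub) Audit(int) ([]string, error) { return nil, nil }
func (stub) ResetPassword(int) error     { return nil }

func main() {
	fmt.Println(Greet(stub{}, 7)) // prints hello, user 7
}
```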

Pass pointers for population

Before I talk about what this means, we need to get a little philosophical. If you step back and think about it, every single program ever written does exactly the same thing. Every program takes in data, processes the data, and then sends the processed data someplace. This is true whether you are writing a payroll system, taking in HTTP requests and returning back web pages, or checking a joystick to see what button was pressed so you know what sort of move to show on-screen. Programs process data.

And if you look at programs this way, the most important thing you can do is make sure that it is easy to understand how the data is being transformed. And that’s why it’s a good idea to keep data as immutable as possible as it flows through your code. Because data that doesn’t change is data that’s easy to track.

In Go, we have reference types and value types. The difference between the two is whether the variable holds the data itself or a reference to the data’s location in memory. Pointers, slices, maps, channels, interfaces, and functions are reference types, and everything else is a value type. If you assign a variable of a value type to another variable, it makes a copy of the value; a change to one variable doesn’t change the other’s value.

Assigning a reference type variable to another reference type variable means they both share the same memory location, so if you change the data pointed to by one, you change the data pointed to by the other. This is true for both local variables, and for parameters to functions.

    func main() {
        //value types
        a := 1
        b := a
        b = 2
        fmt.Println(a, b) // prints 1 2
        //reference types
        c := &a
        *c = 3
        fmt.Println(a, b, *c) // prints 3 2 3
    }
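The same sharing applies to function parameters. Here’s a small sketch (invented for illustration): a map passed to a function still refers to the caller’s underlying storage, so the function’s change is visible to the caller.

```go
package main

import "fmt"

// mutate receives a map. The map value is copied, but the copy still
// refers to the same underlying storage, so the caller sees the change.
func mutate(m map[string]int) {
	m["a"] = 99
}

func main() {
	m := map[string]int{"a": 1}
	mutate(m)
	fmt.Println(m["a"]) // prints 99
}
```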
  

Good Go developers want to make it easy to understand how data is gathered. They make sure to use value parameters for functions as often as possible. Go doesn’t have a way to mark the fields in a struct final or parameters to a function as final, but if a function has value parameters, modifying the values of the parameters doesn’t change the value of the variables in the calling function. All the called function can do is return a value to the calling function. Given this, if you populate a struct by calling functions that take value parameters, you can then compose the values into the struct and it is clear exactly where each value in the struct came from.

    type Foo struct {
        A int
        B string
    }

    func getA() int {
        return 20
    }

    func getB(i int) string {
        return fmt.Sprintf("%d", i*2)
    }

    func main() {
        f := Foo{}
        f.A = getA()
        f.B = getB(f.A)
        //I know exactly what went into building f
        fmt.Println(f)
    }
  

So how do we be evil? We do it by inverting this model.

Rather than calling functions that return values that we compose together, you pass a pointer to a struct into functions, and let them make changes to the struct. Since every function has the whole struct, the only way to know which fields are being modified is to look through all of the code. You can have invisible dependencies between the functions, too, with one function putting data in that a second function needs, but nothing in the code to indicate that you must call the first function first. If you build your data structures this way, you’ll be sure that no one else will understand what your code is doing.

    type Foo struct {
        A int
        B string
    }

    func setA(f *Foo) {
        f.A = 20
    }

    //Secret dependency on f.A hanging out here!
    func setB(f *Foo) {
        f.B = fmt.Sprintf("%d", f.A*2)
    }

    func main() {
        f := Foo{}
        setA(&f)
        setB(&f)
        //Who knows what setA and setB
        //are doing or depending on?
        fmt.Println(f)
    }
  

Propagating panics

And now we’re on to error handling. Maybe you’re thinking that it’s pretty evil to have programs that are roughly 75% error handling, and I wouldn’t say you were entirely wrong. Go code has a lot of error handling, front and center. And sure, it would be nice if there was a way to make it a bit less in your face. But errors happen and how you handle errors is what separates the professionals from the amateurs. Bad error handling produces unstable programs that are hard to debug and hard to maintain. Sometimes being good means doing the hard work.

    func (dus DBUserService) Load(id int) (User, error) {
        rows, err := dus.DB.Query("SELECT name FROM USERS WHERE ID = ?", id)
        if err != nil {
            return User{}, err
        }
        if !rows.Next() {
            return User{}, fmt.Errorf("no user for id %d", id)
        }
        var name string
        err = rows.Scan(&name)
        if err != nil {
            return User{}, err
        }
        err = rows.Close()
        if err != nil {
            return User{}, err
        }
        return User{Id: id, Name: name}, nil
    }
  

Many languages, such as C++, Python, Ruby, and Java, use exceptions to handle errors. If something goes wrong, developers in those languages throw or raise an exception, in the expectation that some code somewhere will take care of it. Of course, this depends on the clients of the code knowing that it was even possible for the exception to be thrown, because, with the (no pun intended) exception of Java’s checked exceptions, there’s nothing in the function or method signature in languages with exceptions to tell you that an exception might happen. So how do developers know what exceptions to worry about? They have two options:

  • First, they can read through all of the source code of all the libraries that their code calls, and all the libraries that the libraries call, and so on.
  • Second, they can trust the documentation. Maybe I am jaded, but I personally find it hard to trust the documentation.

So how do we bring this brand of evil to Go? By abusing the panic and recover keywords. Panic is meant for situations like “disk disappeared” or “network card exploded.” It’s not for things like “someone passed a string instead of an int.” But it could be.

Unfortunately, other, less enlightened developers will be returning errors from their code, so here’s a little helper function, PanicIfErr. Use it to turn other developers’ errors into panics.

    func PanicIfErr(err error) {
        if err != nil {
            panic(err)
        }
    }
  

You can use PanicIfErr to wrap other people’s errors and shrink your code down: no more ugly error handling. Anything that would have been an error is now a panic. It’s so productive!

    func (dus DBUserService) LoadEvil(id int) User {
        rows, err := dus.DB.Query(
            "SELECT name FROM USERS WHERE ID = ?", id)
        PanicIfErr(err)
        if !rows.Next() {
            panic(fmt.Sprintf("no user for id %d", id))
        }
        var name string
        PanicIfErr(rows.Scan(&name))
        PanicIfErr(rows.Close())
        return User{Id: id, Name: name}
    }
  

You can put a recover somewhere near the top, maybe in your own middleware, and say that not only are you handling the errors, you have made everyone’s code cleaner too. Doing evil by looking like you are doing good is the best kind of evil.

    func PanicMiddleware(h http.Handler) http.Handler {
        return http.HandlerFunc(
            func(rw http.ResponseWriter, req *http.Request) {
                defer func() {
                    if r := recover(); r != nil {
                        fmt.Println("Yeah, something happened.")
                    }
                }()
                h.ServeHTTP(rw, req)
            })
    }
  

Side effect set-up

Next we’re on to configuration by side effect. Remember, a good Go developer wants to understand how data flows through their program. The best way to do that is to know what the data is flowing through, by explicitly configuring the dependencies in an application. Even types that satisfy the same interface may behave very differently, like the difference between code that stores data in memory and code that calls a database to do the same work. However, there are ways to set up dependencies in Go without making explicit calls.

Just like many other languages, Go has a way to run code magically without invoking it directly. If you create a function called init with no parameters, the function automatically runs whenever its package is loaded. And, just to make it more confusing, if there are multiple functions named init in a single file, or across multiple files in a single package, they will all run.

    package account

    import "fmt"

    type Account struct {
        Id     int
        UserId int
    }

    func init() {
        fmt.Println("I run magically!")
    }

    func init() {
        fmt.Println("I also run magically, and I am also named init()")
    }
  

Init functions are often paired with blank imports. Go has a special kind of import declaration that looks like `import _ "github.com/lib/pq"`. When you use the blank identifier as the name for an imported package, Go runs the init functions in that package but doesn’t expose any of the package’s identifiers. Some Go libraries, like database drivers and image formats, are loaded by including a blank import of the package somewhere in your application, just to trigger the init function in the package so it can register some code.

    package main

    import (
        "database/sql"
        _ "github.com/lib/pq" // blank import triggers the driver's init
    )

    func main() {
        db, err := sql.Open(
            "postgres",
            "postgres://jon@localhost/evil?sslmode=disable")
        _, _ = db, err // error handling elided in this example
    }
  

Now, this is obviously a bad idea. When you use init functions, you have code that runs magically, completely outside the control of the developer. Go best practices discourage the use of init functions. They are an obscure feature, they obfuscate code flow, and they are easy to hide in a library.

In other words, init functions are perfect for our evil purposes. Rather than having explicit configuration or registration of items in your packages, you can use init functions and blank imports to set up your application’s state. In this example, we’re making account available to the rest of the application via a registry and the account package puts itself into the registry using an init function.

    package account

    import (
        "github.com/evil-go/example/registry"
    )

    type StubAccountService struct{}

    func (a StubAccountService) GetBalance(accountId int) int {
        return 1000000
    }

    func init() {
        registry.Register("account", StubAccountService{})
    }
  

If you want to use an account, you put a blank import somewhere in your program. It doesn’t have to be main, it doesn’t have to be related code, it just has to be *somewhere*. It’s magic!

    package main

    import (
        "fmt"

        _ "github.com/evil-go/example/account"
        "github.com/evil-go/example/registry"
    )

    type Balancer interface {
        GetBalance(int) int
    }

    func main() {
        a := registry.Get("account").(Balancer)
        money := a.GetBalance(12345)
        fmt.Println(money)
    }
  

If you use inits in your libraries to set up your dependencies, you will watch other developers scratch their heads, wondering how those dependencies were set up and how to change them. And no one but you will be the wiser.

Complicated configuration

There are even more things we can do with configuration. If you’re being a good Go developer, you want to isolate the configuration from the rest of the program. In the main() function, you capture properties from the environment and convert them into the values that are needed by the components that are explicitly wired together. Your components don’t know anything about property files or how those properties are named. For simple components, you set public properties and in more complicated situations, you can create a factory function that takes in the configuration information and returns a properly configured component.

    func main() {
        b, err := ioutil.ReadFile("account.json")
        if err != nil {
            fmt.Printf("error reading config file: %v\n", err)
            os.Exit(1)
        }
        m := map[string]interface{}{}
        json.Unmarshal(b, &m)
        prefix := m["account.prefix"].(string)
        maker := account.NewMaker(prefix)
        _ = maker // maker is now wired up for the rest of the app
    }

    type Maker struct {
        prefix string
    }

    func (m Maker) NewAccount(name string) Account {
        return Account{Name: name, Id: m.prefix + "-12345"}
    }

    func NewMaker(prefix string) Maker {
        return Maker{prefix: prefix}
    }
  

But evil developers know that it’s best to sprinkle configuration information throughout the entire program. Rather than having a single function in your package that defines the names and types of values that your package needs, use a function that takes in a map of string to string and convert them yourself.
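A sketch of that evil constructor (the type and the magic keys are hypothetical): the signature tells callers nothing about which keys it reads or what formats it expects, and a typo in a key silently produces zero values.

```go
package main

import (
	"fmt"
	"strconv"
)

// Maker and its magic keys are invented for this sketch.
type Maker struct {
	prefix  string
	retries int
}

// NewMaker's signature tells callers nothing: they must somehow know
// which keys it reads and what formats the values take.
func NewMaker(props map[string]string) Maker {
	retries, _ := strconv.Atoi(props["account.retries"]) // silently 0 on bad or missing input
	return Maker{
		prefix:  props["account.prefix"], // silently "" if the key is missing
		retries: retries,
	}
}

func main() {
	// A typo in the key ("prefx") fails silently with zero values.
	m := NewMaker(map[string]string{"account.prefx": "ACC"})
	fmt.Printf("%+v\n", m) // prints {prefix: retries:0}
}
```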

If that seems too obviously hostile, use an init function to load a property file from inside of your package and set up the values yourself. It may seem like you have made life easier for other developers, but you know better.

With an init function, you can define new properties deep within the code and no one will ever find them until they get to production and everything crashes because something is missing from one of the dozen different properties files that are required in order to launch correctly. If you want extra evil powers, you can offer to set up a wiki to track all of the properties across all of the libraries and “forget” to include new properties periodically. As the keeper of the properties, you become the only person who can get the software to run.

    func (m maker) NewAccount(name string) Account {
        return Account{Name: name, Id: m.prefix + "-12345"}
    }

    var Maker maker

    func init() {
        b, _ := ioutil.ReadFile("account.json")
        m := map[string]interface{}{}
        json.Unmarshal(b, &m)
        Maker.prefix = m["account.prefix"].(string)
    }
  

Frameworks for functionality

Finally, we come to frameworks vs. libraries. The difference is subtle. It’s not just a size thing; you can have large libraries and small frameworks. A framework calls your code, while you call a library’s code. Frameworks require you to write your code in a certain way, whether it’s naming your methods just so, or making sure they meet certain interfaces, or making you register your code with the framework. Frameworks dump their requirements all over your code. In general, frameworks own you.

Go encourages libraries because libraries are composable. While, sure, every library expects data to be passed in in a certain format, you can write a little bit of glue code to massage the output of one library into the input for another.
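As a small sketch of that glue (the scenario is invented; only the standard library is involved): suppose one library hands you comma-separated values and another wants a JSON array. A few lines adapt one to the other.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// csvToJSON is hypothetical glue code: one (imagined) library
// produces comma-separated values, another consumes a JSON array.
func csvToJSON(csv string) (string, error) {
	parts := strings.Split(csv, ",")
	b, err := json.Marshal(parts)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	out, err := csvToJSON("a,b,c")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints ["a","b","c"]
}
```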

With frameworks, it’s hard to get them to play together nicely, because each framework wants complete control over the lifecycle of the code that runs inside of it. Oftentimes, the only way to get frameworks to work together is for the authors of the frameworks to get together and put in explicit support for each other. And the best way to use the evil of frameworks to gain lasting power: write a custom framework that is only used in-house.

The once and future evil

Once you have mastered these techniques, you will be well on the path to evil. In my next blog post, I’ll show you one way to deploy all of this evil, and what it looks like when you convert good code to evil.

Read part 2 here.


Jon Bodner, Senior Distinguished Engineer, Tech Commercialization

Jon Bodner has been developing software professionally for over 20 years as a programmer, chief engineer, and architect. He has worked in a wide variety of industries: government, education, internet, legal, financial, and e-commerce. His work has taken him from the internals of the JVM to the DNS resolvers that shape internet traffic. A veteran of several startups, Jon was also the architect for Blackboard's Beyond division, which was the company's first foray into cloud services. He holds a BS in Computer Science from Rensselaer Polytechnic Institute and an MS in Computer Science from the University of Wisconsin-Madison.
