TL;DR: The standard Go runtime gives you asynchronous I/O under the hood while presenting the developer a simple synchronous interface.
I’ve worked professionally with Go for a couple of years, and one thing that has repeatedly surprised me is how little attention is given to what I consider Go’s most important feature.
When asked about Go’s most important features, many folks will list Go’s simplicity, its C interoperability, its compile speed, and concurrency. All of these matter, but the one I consider most important is almost invisible and can be quite subtle.
Let’s start with a code example that shows the feature; see if you can guess it yourself.
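Since the point is how ordinary the code looks, picture a minimal net/http server along these lines (a rough sketch; the handler body, route, and port are illustrative, not the original example):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// The handler reads like plain, synchronous code.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello, world")
}

func main() {
	http.HandleFunc("/", handler)
	// Each incoming request is served in its own goroutine.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```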
Did you see it? There isn’t a lot of code there. To be honest, the feature I’m talking about is demonstrated in this code, but since it is part of the standard Go runtime, it is invisible. To better highlight it, let’s look at similar code written in Java.
Looks quite similar... except it’s not. Ignore that the code is longer; a good IDE will offset the extra typing. To make this code perform as well as the Go code, you would need to do a LOT of extra work. Have you guessed it yet?
The feature I’m talking about is the Go runtime’s handling of blocking goroutines. When I first started using Go, I saw goroutines as a more efficient variant of Java’s green threads, but it never clicked that code which looks like traditional blocking I/O is, under the hood, effectively async I/O. Because of this, Go can schedule work much more efficiently around blocking calls. To get the same performance out of Java you would need to add thread pools, futures, or some other async library.
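You can see the effect in isolation with a rough sketch like the one below, which parks a large number of goroutines on a blocking call at once (time.Sleep stands in for blocking I/O, and the counts are arbitrary):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Looks like a blocking call, but the runtime parks this goroutine
			// and hands the OS thread to other goroutines that are ready to run.
			time.Sleep(100 * time.Millisecond)
		}()
	}
	wg.Wait()
	fmt.Println("10,000 goroutines finished without 10,000 OS threads")
}
```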
But don’t take my word for it; let’s run a quick performance test. The results below were gathered with Apache Bench, on the same machine, with ample warmup time.
To make a long story short, the Java version hits ~21K requests per second while the Go version hits ~36K requests per second.
This is, of course, a synthetic example, but it is not far off from a pattern I use quite often: a network request comes in, it’s processed and routed to some code, that code does work (often another network or disk-based request), and then the results are returned. In Go, all the blocking on network and disk is handled very efficiently. In Java and most other languages you can get this benefit as well, but it typically requires a lot more code, planning, and testing. It’s virtually free in Go. That is very powerful.
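As a sketch of that request-in, request-out pattern, here is a handler that makes a second blocking network call on behalf of the incoming request (the backend URL and route are hypothetical):

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// A request comes in, we make a second blocking network call on its behalf,
// and stream the result back. Every blocking point here is just a goroutine
// being parked and resumed by the runtime.
func handler(w http.ResponseWriter, r *http.Request) {
	resp, err := http.Get("http://localhost:9000/data") // hypothetical backend
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Both the inbound read and the outbound call block a goroutine, not an OS thread, so thousands of these can be in flight at once.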
In languages like C you might use a library like libevent to get this type of behavior. In Java you might use thread pools and futures. In Python you would rely on async/await. But in Go you get it for free: you write your code as if it’s synchronous, and wherever blocking occurs the goroutine can be suspended so that other goroutines that are ready to do work can be scheduled.
So, as a server engineer who relies on this pattern a lot, I think this is Go’s strongest and most often misunderstood feature!
What do you think?