Go – Concurrency Explained: Goroutines, Channels & Patterns (2025 Guide)
Introduction – Why Concurrency Matters in Go
Go was built with concurrency in mind. Thanks to goroutines and channels, Go allows developers to write highly concurrent applications in a safe, lightweight, and easy-to-read manner. Whether you’re building web servers, data pipelines, or network tools, Go’s concurrency model offers scalability without complexity.
In this section, you’ll learn:
- How to create and use goroutines
- How to communicate safely with channels
- When to use buffered vs. unbuffered channels
- Real-world concurrency patterns and best practices
What Is a Goroutine?
A goroutine is a lightweight thread managed by the Go runtime. You can spawn thousands of them with minimal overhead.
package main

import (
	"fmt"
	"time"
)

func sayHello() {
	fmt.Println("Hello from goroutine")
}

func main() {
	go sayHello()               // starts a new goroutine
	time.Sleep(1 * time.Second) // wait to see output
}
Output:
Hello from goroutine
Use the go keyword before a function call to start it as a concurrent task.
Channels – Communicating Between Goroutines
Channels allow goroutines to safely exchange data.
ch := make(chan string)
go func() {
	ch <- "Hello"
}()
msg := <-ch
fmt.Println(msg) // Output: Hello
- ch <- sends a value into the channel
- <-ch receives a value from the channel
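A minimal runnable sketch (the function name square is illustrative) that moves a computed value between goroutines; note that the send on the unbuffered channel blocks until main receives:

```go
package main

import "fmt"

// square computes n*n in a goroutine and delivers the result
// over an unbuffered channel.
func square(n int, out chan int) {
	out <- n * n
}

func main() {
	out := make(chan int)
	go square(6, out)
	fmt.Println(<-out) // 36
}
```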
Buffered Channels
ch := make(chan int, 2)
ch <- 1
ch <- 2
fmt.Println(<-ch) // 1
fmt.Println(<-ch) // 2
Sends to a buffered channel do not block until the buffer is full, which makes them useful for bursty communication.
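As a sketch of those semantics, the built-in len and cap functions show how many values are queued and how many the buffer can hold:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2) // capacity 2: two sends succeed without a receiver
	ch <- 1
	ch <- 2
	fmt.Println(len(ch), cap(ch)) // 2 2
	// A third send here (ch <- 3) would block until a receive frees space.
	fmt.Println(<-ch, <-ch) // 1 2 (values arrive in FIFO order)
}
```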
Directional Channels (Read-Only / Write-Only)
func sendOnly(ch chan<- int) {
	ch <- 10
}

func recvOnly(ch <-chan int) {
	fmt.Println(<-ch)
}
Directional channel types prevent misuse (for example, a send-only parameter cannot be received from) and make function contracts explicit.
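Wiring the two functions above into a runnable program is straightforward; a bidirectional channel converts implicitly to each directional type at the call site:

```go
package main

import "fmt"

func sendOnly(ch chan<- int) {
	ch <- 10
}

func recvOnly(ch <-chan int) {
	fmt.Println(<-ch)
}

func main() {
	ch := make(chan int) // bidirectional; narrowed at each call
	go sendOnly(ch)
	recvOnly(ch) // prints 10
}
```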
select – Wait on Multiple Channel Ops
ch1 := make(chan string)
ch2 := make(chan string)
go func() { ch1 <- "one" }()
go func() { ch2 <- "two" }()
select {
case msg1 := <-ch1:
	fmt.Println("Received:", msg1)
case msg2 := <-ch2:
	fmt.Println("Received:", msg2)
}
select lets you wait on multiple channel operations, executing the case that becomes ready first (choosing at random if several are ready at once).
Close a Channel
ch := make(chan int)
go func() {
	for i := 1; i <= 3; i++ {
		ch <- i
	}
	close(ch)
}()
for val := range ch {
	fmt.Println(val)
}
Output:
1
2
3
close(ch) indicates no more values will be sent, allowing graceful termination.
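Besides range, a receive can use the comma-ok idiom to detect a closed channel explicitly; this sketch shows that buffered values are still delivered after close:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 42
	close(ch)

	v, ok := <-ch
	fmt.Println(v, ok) // 42 true: buffered value is still delivered after close

	v, ok = <-ch
	fmt.Println(v, ok) // 0 false: channel is closed and drained
}
```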
WaitGroup – Wait for Multiple Goroutines
package main

import (
	"fmt"
	"sync"
	"time"
)

var wg sync.WaitGroup

func worker(id int) {
	defer wg.Done() // mark this goroutine finished, even on early return
	fmt.Println("Worker", id, "starting")
	time.Sleep(1 * time.Second)
	fmt.Println("Worker", id, "done")
}

func main() {
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go worker(i)
	}
	wg.Wait()
}
sync.WaitGroup waits for a collection of goroutines to finish: call Add before starting each one, Done when it completes, and Wait to block until the counter reaches zero.
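Combining a WaitGroup with channels yields a simple worker-pool sketch, one of the real-world patterns mentioned in the introduction (the function name pool and parameter numWorkers are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// pool fans jobs out to numWorkers goroutines, squares each job,
// and returns the sum of the results.
func pool(jobs []int, numWorkers int) int {
	in := make(chan int)
	out := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in { // each worker drains jobs until in is closed
				out <- n * n
			}
		}()
	}

	// Close out once every worker has finished sending.
	go func() {
		wg.Wait()
		close(out)
	}()

	// Feed the jobs, then close in so the workers' range loops end.
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
	}()

	sum := 0
	for r := range out {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(pool([]int{1, 2, 3, 4}, 3)) // 30 (1 + 4 + 9 + 16)
}
```

Closing out in a separate goroutine after wg.Wait() is the key move: it lets the main loop use range over out without deadlocking.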
Best Practices
| Practice | Reason |
|---|---|
| Use channels for communication | Ensures safe, synchronized data sharing |
| Don’t block goroutines | Leads to deadlocks or hanging programs |
| Use select for multiplexing | Cleanly handles multiple input sources |
| Close channels when done | Prevents memory leaks and stuck receivers |
| Avoid global goroutines | Hard to trace and manage |
Summary – Recap & Next Steps
Go’s concurrency model makes parallelism simple using goroutines and channels. With minimal syntax, you can build robust concurrent applications that are lightweight and efficient.
Key Takeaways:
- Use the go keyword to start a goroutine
- Use channels (chan) for safe data exchange
- Use select to handle multiple channel operations
- Use sync.WaitGroup to synchronize completion
- Avoid race conditions and deadlocks with design clarity
Next: Explore Mutexes & Race Conditions, Context for Cancellation, or build Concurrent Web Crawlers.
FAQs – Go Concurrency
How many goroutines can I run in Go?
Thousands, even millions. Goroutines are lightweight: each starts with a small stack (a few kilobytes) that the Go runtime grows and shrinks as needed.
What’s the difference between goroutines and threads?
Goroutines are much cheaper than OS threads and are scheduled by the Go runtime, which multiplexes many goroutines onto a small number of OS threads.
What happens if I don’t receive from a channel?
The sending goroutine blocks until a receiver is ready; if no receiver ever arrives, this causes a goroutine leak or a deadlock.
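One hedge against an accidental block is a select with a default case, which makes the send non-blocking; a minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered, and no receiver is ready
	select {
	case ch <- 1:
		fmt.Println("sent")
	default:
		// Taken immediately: the send would block, so we skip it.
		fmt.Println("no receiver ready, skipped send")
	}
}
```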
When should I close a channel?
When no more values will be sent. Only the sender should close a channel, never the receiver.
Is concurrency in Go parallel?
Not always. Concurrency is about structure; parallelism depends on CPU availability and runtime scheduling.