timersInit sets up the timers.
It need only be done once per peer.
timersStart does the work to prepare the timers
for a newly running peer. It needs to be done
every time a peer starts.
Separate the two and call them in the appropriate places.
This prevents data races on the peer's timers fields
when starting and stopping peers.
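A minimal sketch of the split (a single stand-in timer; the real peer
has several, with handshake and keepalive callbacks):

import "time"

type Peer struct {
    retransmitHandshake *time.Timer // stand-in for the real timer fields
}

// timersInit allocates the timers. Call it once, at peer creation, so
// that later starts and stops never race on the timer fields.
func (peer *Peer) timersInit() {
    peer.retransmitHandshake = time.NewTimer(time.Hour)
    peer.retransmitHandshake.Stop()
}

// timersStart resets timer state. Call it every time the peer starts.
func (peer *Peer) timersStart() {
    peer.retransmitHandshake.Reset(5 * time.Second)
}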
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We already track this state elsewhere. No need to duplicate.
The cost of calling changeState is negligible.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The old code silently accepted negative MTUs.
It also set MTUs above the maximum.
It also had hard-to-follow, deeply nested conditionals.
Add more paranoid handling,
and make the code more straight-line.
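A sketch of the straight-line shape (bounds and messages here are
illustrative, not the real ones):

import "errors"

func validateMTU(mtu, max int) (int, error) {
    if mtu < 0 {
        return 0, errors.New("MTU must not be negative")
    }
    if mtu > max {
        return 0, errors.New("MTU must not exceed the device maximum")
    }
    return mtu, nil
}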
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The TUN event reader handles three things: MTU changes, device up, and device down.
Changing the MTU after the device is closed does no harm.
Device up and device down don't make sense after the device is closed,
but we can check that condition before proceeding with changeState.
There's thus no reason to block device.Close on RoutineTUNEventReader exiting.
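A sketch of the resulting loop shape, with simplified stand-ins for the
real event constants and channel:

type Event int

const (
    EventUp Event = 1 << iota
    EventDown
    EventMTUUpdate
)

func (device *Device) RoutineTUNEventReader() {
    for event := range device.tunEvents {
        if event&EventMTUUpdate != 0 {
            // applying an MTU change after Close is harmless
        }
        if event&EventUp != 0 {
            device.Up() // changeState refuses transitions on a closed device
        }
        if event&EventDown != 0 {
            device.Down()
        }
    }
    // Close no longer needs to wait for this goroutine to exit.
}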
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This commit simplifies device state management.
It creates a single unified state variable and documents its semantics.
It also makes state changes more atomic.
As an example of the sort of bug that occurred due to non-atomic state changes,
the following sequence of events used to occur approximately every 2.5 million test runs:
* RoutineTUNEventReader received an EventDown event.
* It called device.Down, which called device.setUpDown.
* That set device.state.changing, but did not yet attempt to lock device.state.Mutex.
* Test completion called device.Close.
* device.Close locked device.state.Mutex.
* device.Close blocked on a call to device.state.stopping.Wait.
* device.setUpDown then attempted to lock device.state.Mutex and blocked.
Deadlock results. setUpDown cannot progress because device.state.Mutex is locked.
Until setUpDown returns, RoutineTUNEventReader cannot call device.state.stopping.Done.
Until device.state.stopping.Done gets called, device.state.stopping.Wait is blocked.
As long as device.state.stopping.Wait is blocked, device.state.Mutex cannot be unlocked.
This commit fixes that deadlock by holding device.state.mu
when checking that the device is not closed.
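A hedged sketch of the fixed pattern (state names and fields are
illustrative; the real device has more machinery):

import "sync"

type deviceState uint32

const (
    deviceStateDown deviceState = iota
    deviceStateUp
    deviceStateClosed
)

type Device struct {
    state struct {
        sync.Mutex
        current deviceState // guarded by the mutex
    }
}

func (device *Device) changeState(want deviceState) {
    device.state.Lock()
    defer device.state.Unlock()
    if device.state.current == deviceStateClosed {
        return // closed is terminal; Up/Down after Close are no-ops
    }
    // ... start or stop routines while still holding the lock,
    // so Close cannot interleave with a half-finished transition ...
    device.state.current = want
}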
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It is no longer necessary, as of 454de6f3e64abd2a7bf9201579cd92eea5280996
(device: use channel close to shut down and drain decryption channel).
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The leak test had rare flakes.
If a system goroutine started at just the wrong moment, you'd get a false positive.
Instead of looping until the goroutine count looks good and then
performing a final check, exit successfully as soon as the count looks good.
Also, check more frequently, in an attempt to complete faster.
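A sketch of the flake-resistant loop (threshold and timing values are
illustrative):

import (
    "runtime"
    "testing"
    "time"
)

func checkGoroutineLeaks(t *testing.T, baseline int) {
    deadline := time.Now().Add(5 * time.Second)
    for time.Now().Before(deadline) {
        if runtime.NumGoroutine() <= baseline {
            return // success: exit the moment the count looks good
        }
        time.Sleep(10 * time.Millisecond) // poll often to finish faster
    }
    t.Fatalf("leaked goroutines: %d running, want at most %d",
        runtime.NumGoroutine(), baseline)
}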
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Here is the old implementation:
type WaitPool struct {
    c chan interface{}
}

func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
    p := &WaitPool{c: make(chan interface{}, max)}
    for i := uint32(0); i < max; i++ {
        p.c <- new()
    }
    return p
}

func (p *WaitPool) Get() interface{} {
    return <-p.c
}

func (p *WaitPool) Put(x interface{}) {
    p.c <- x
}
It performs worse than the new one:
name old time/op new time/op delta
WaitPool-16 16.4µs ± 5% 15.1µs ± 3% -7.86% (p=0.008 n=5+5)
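The replacement keeps a sync.Pool behind an atomic count and a condition
variable, along these lines (a hedged sketch; the real version differs
in detail):

import (
    "sync"
    "sync/atomic"
)

type WaitPool struct {
    pool  sync.Pool
    cond  sync.Cond
    lock  sync.Mutex
    count uint32 // accessed atomically
    max   uint32
}

func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
    p := &WaitPool{pool: sync.Pool{New: new}, max: max}
    p.cond = sync.Cond{L: &p.lock}
    return p
}

func (p *WaitPool) Get() interface{} {
    if p.max != 0 {
        p.lock.Lock()
        for atomic.LoadUint32(&p.count) >= p.max {
            p.cond.Wait() // block until a Put frees capacity
        }
        atomic.AddUint32(&p.count, 1)
        p.lock.Unlock()
    }
    return p.pool.Get()
}

func (p *WaitPool) Put(x interface{}) {
    p.pool.Put(x)
    if p.max == 0 {
        return
    }
    atomic.AddUint32(&p.count, ^uint32(0)) // decrement
    p.cond.Signal()
}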
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
benchmark old ns/op new ns/op delta
BenchmarkUAPIGet-16 2872 2157 -24.90%
benchmark old allocs new allocs delta
BenchmarkUAPIGet-16 30 18 -40.00%
benchmark old bytes new bytes delta
BenchmarkUAPIGet-16 737 256 -65.26%
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This moves to a simple queue with no routine processing it, to reduce
scheduler pressure.
This splits latency in half!
benchmark old ns/op new ns/op delta
BenchmarkThroughput-16 2394 2364 -1.25%
BenchmarkLatency-16 259652 120810 -53.47%
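For illustration, the pattern is roughly this (a hedged sketch of the
general technique, not the actual queue):

import "sync"

type queue struct {
    mu    sync.Mutex
    items []interface{}
}

func (q *queue) push(x interface{}) {
    q.mu.Lock()
    q.items = append(q.items, x)
    q.mu.Unlock()
}

func (q *queue) drain(f func(interface{})) {
    q.mu.Lock()
    items := q.items
    q.items = nil
    q.mu.Unlock()
    for _, x := range items {
        f(x) // processed on the caller's goroutine: no handoff, no wakeup
    }
}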
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This makes the IpcGet method much faster.
We also refactor the traversal API to use a callback so that we don't
need to allocate at all. To keep it allocation-free, we self-mask on
insertion, which in turn means that split intermediate nodes require a
copy of the bits.
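A simplified illustration of the callback-based traversal (a stand-in
trie, not the real allowedips structure):

// walk hands each stored entry to cb without building a result slice,
// so traversal itself does not allocate. Returning false stops early.
type trieNode struct {
    bits  []byte
    cidr  uint8
    entry bool
    child [2]*trieNode
}

func (node *trieNode) walk(cb func(bits []byte, cidr uint8) bool) bool {
    if node == nil {
        return true
    }
    if node.entry && !cb(node.bits, node.cidr) {
        return false
    }
    return node.child[0].walk(cb) && node.child[1].walk(cb)
}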
benchmark old ns/op new ns/op delta
BenchmarkUAPIGet-16 3243 2659 -18.01%
benchmark old allocs new allocs delta
BenchmarkUAPIGet-16 35 30 -14.29%
benchmark old bytes new bytes delta
BenchmarkUAPIGet-16 1218 737 -39.49%
This benchmark is good, though it's only for a pair of peers, each with
only one allowed IP. As the number of peers and allowed IPs grows, the
delta expands considerably.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
There are very few cases, if any, in which a user only wants one of
these levels, so combine them into a single level.
While we're at it, reduce indirection on the loggers by using an empty
function rather than a nil function pointer. It's not like we have
retpolines anyway, and we were already calling through a function behind
a nil-check branch, so this seems like a net gain.
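A sketch of the empty-function trick (names are close to, but not
guaranteed to be, the real ones):

type LogFunc func(format string, args ...interface{})

// DiscardLogf is a no-op. Assigning it instead of nil means call sites
// invoke the function unconditionally, with no branch per log call.
func DiscardLogf(format string, args ...interface{}) {}

type Logger struct {
    Verbosef LogFunc // a single "verbose" level replaces debug and info
    Errorf   LogFunc
}

func NewLogger(verbosef, errorf LogFunc) *Logger {
    if verbosef == nil {
        verbosef = DiscardLogf
    }
    if errorf == nil {
        errorf = DiscardLogf
    }
    return &Logger{Verbosef: verbosef, Errorf: errorf}
}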
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This commit overhauls wireguard-go's logging.
The primary, motivating change is to use a function instead
of a *log.Logger as the basic unit of logging.
Using functions provides a lot more flexibility for
people to bring their own logging system.
It also introduces logging helper methods on Device.
These reduce line noise at the call site.
They also allow for log functions to be nil;
when nil, instead of generating a log line and throwing it away,
we don't bother generating it at all.
This spares allocation and pointless work.
This is a breaking change, although the fix required
of clients is fairly straightforward.
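A hedged sketch of the function-based shape and a nil-tolerant helper
on Device (field and method names are illustrative):

type LogFunc func(format string, args ...interface{})

type Logger struct {
    Debugf LogFunc
    Infof  LogFunc
    Errorf LogFunc
}

type Device struct {
    log *Logger
}

// debugf is a helper: when the function is nil, the log line is never
// formatted at all, sparing the allocation and the work.
func (device *Device) debugf(format string, args ...interface{}) {
    if device.log.Debugf != nil {
        device.log.Debugf(format, args...)
    }
}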
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The declaration of err in
nextByte, err := buffered.ReadByte()
shadows the declaration of err in
op, err := buffered.ReadString('\n')
above. As a result, the assignments to err in
err = ipcErrorf(ipc.IpcErrorInvalid, "trailing character in UAPI get: %c", nextByte)
and in
err = device.IpcGetOperation(buffered.Writer)
do not modify the correct err variable.
Found by staticcheck.
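One way to remove the shadowing, in miniature (illustrative; not
necessarily the exact fix taken):

var nextByte byte
nextByte, err = buffered.ReadByte() // plain =, so the enclosing err is assigned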
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Plenty more to go, but a start:
name old time/op new time/op delta
UAPIGet-4 6.37µs ± 2% 5.56µs ± 1% -12.70% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
UAPIGet-4 1.98kB ± 0% 1.22kB ± 0% -38.71% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
UAPIGet-4 42.0 ± 0% 35.0 ± 0% -16.67% (p=0.000 n=10+10)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Unify the handling of unexpected UAPI errors.
The comment that says "should never happen" is incorrect;
this could happen due to I/O errors. Correct it.
Change error message capitalization for consistency.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The goal of this change is to make the structure
of IpcSetOperation easier to follow.
IpcSetOperation contains a small state machine:
It starts by configuring the device,
then shifts to configuring one peer at a time.
Having the code all in one giant method obscured that structure.
Split out the parts into helper functions and encapsulate the peer state.
This makes the overall structure more apparent.
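A hedged sketch of the resulting shape (the helper names and the
peer-state type are illustrative):

func (device *Device) IpcSetOperation(r io.Reader) (err error) {
    scanner := bufio.NewScanner(r)
    var peer *ipcSetPeer // nil: still configuring the device itself
    for scanner.Scan() {
        line := scanner.Text()
        if line == "" {
            break // a blank line ends the operation
        }
        key, value, ok := strings.Cut(line, "=")
        if !ok {
            return ipcErrorf(ipc.IpcErrorInvalid, "bad line: %q", line)
        }
        if key == "public_key" {
            peer, err = device.newPeerFromKey(value) // shift to peer state
        } else if peer == nil {
            err = device.handleDeviceLine(key, value)
        } else {
            err = device.handlePeerLine(peer, key, value)
        }
        if err != nil {
            return err
        }
    }
    return nil
}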
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Expand IPCError to contain a wrapped error,
and add a helper to make constructing such errors easier.
Add a defer-based "log on returned error" to IpcSetOperation.
This lets us simplify all of the error return paths.
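A sketch of the expanded error type and the defer (ipcErrorf appears in
the shadowing fix above; the remaining details are illustrative):

type IPCError struct {
    code int64 // numeric class reported across the UAPI boundary
    err  error // wrapped underlying error
}

func (e IPCError) Error() string    { return e.err.Error() }
func (e IPCError) Unwrap() error    { return e.err }
func (e IPCError) ErrorCode() int64 { return e.code }

// ipcErrorf builds an IPCError from a format string, like fmt.Errorf.
func ipcErrorf(code int64, format string, args ...interface{}) IPCError {
    return IPCError{code: code, err: fmt.Errorf(format, args...)}
}

func (device *Device) IpcSetOperation(r io.Reader) (err error) {
    defer func() {
        if err != nil {
            device.log.Errorf("%v", err) // one log site covers every return path
        }
    }()
    // ... every failure below can simply return ipcErrorf(...) ...
    return nil
}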
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Until we depend on Go 1.16 (which isn't released yet), alias our own
variable to the private member of the net package. This will allow an
easy find-and-replace to make this go away when we eventually switch to
1.16.
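The stopgap amounts to a single variable whose message must match the
net package's private error exactly (the variable name here is
illustrative):

import "errors"

// Matches internal/poll.ErrNetClosing's message; once net.ErrClosed
// exists in Go 1.16, a find-and-replace removes this.
var NetErrClosed = errors.New("use of closed network connection")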
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>