All windows were calculated with pyfda (Python Filter Design Analysis Tool):
https://github.com/chipmuenk/pyfda
Window = Kaiser
beta = 8.6 (Similar to a Blackman Window)
fc = 22.5kHz
-86dB by 23kHz
This also gets rid of Linear Interpolation, which leaves only Low and High, both of which are now Windowed Sinc.
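For reference, a minimal sketch of how Kaiser-windowed sinc taps of this kind can be generated; the function names, tap-count handling, and normalization here are illustrative, not librespot's actual resampler code (the real coefficients were designed with pyfda as noted above):

```rust
/// Zeroth-order modified Bessel function of the first kind, summed until
/// the terms vanish; needed to evaluate the Kaiser window.
fn bessel_i0(x: f64) -> f64 {
    let mut sum = 1.0;
    let mut term = 1.0;
    let mut k = 1.0;
    loop {
        term *= (x / (2.0 * k)).powi(2);
        sum += term;
        if term < 1e-12 * sum {
            return sum;
        }
        k += 1.0;
    }
}

/// Kaiser-windowed sinc low-pass taps, normalized to unity gain at DC.
/// `cutoff` is a fraction of the sample rate (e.g. 22_500.0 / 48_000.0);
/// `beta` would be 8.6 for the design described above.
fn windowed_sinc_taps(num_taps: usize, cutoff: f64, beta: f64) -> Vec<f64> {
    let m = (num_taps - 1) as f64;
    let i0_beta = bessel_i0(beta);
    let mut taps: Vec<f64> = (0..num_taps)
        .map(|n| {
            let n = n as f64;
            // Ideal low-pass impulse response centred on the middle tap.
            let x = 2.0 * std::f64::consts::PI * cutoff * (n - m / 2.0);
            let sinc = if x == 0.0 { 1.0 } else { x.sin() / x };
            // Kaiser window.
            let w = bessel_i0(beta * (1.0 - (2.0 * n / m - 1.0).powi(2)).sqrt()) / i0_beta;
            2.0 * cutoff * sinc * w
        })
        .collect();
    // Normalize so the passband gain is exactly 1.0.
    let sum: f64 = taps.iter().sum();
    taps.iter_mut().for_each(|t| *t /= sum);
    taps
}
```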
The CDN URLs list now includes spotifycdn.com, which has a different format. It was being erroneously interpreted using the scdn.co format, so non-digit characters were being parsed as a timestamp.
Also ignore expiry timestamps we can't parse, so any future new URL formats degrade gracefully instead of failing.
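A minimal sketch of that tolerant behaviour, assuming a helper that receives whatever substring a given CDN format places the timestamp in (the format-specific extraction itself is omitted):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Try to read a unix-seconds expiry. Anything empty or non-numeric yields
/// None ("no known expiry") instead of an error, so unknown future URL
/// formats degrade gracefully rather than failing.
fn parse_expiry(candidate: &str) -> Option<SystemTime> {
    if candidate.is_empty() || !candidate.bytes().all(|b| b.is_ascii_digit()) {
        return None;
    }
    let secs: u64 = candidate.parse().ok()?;
    UNIX_EPOCH.checked_add(Duration::from_secs(secs))
}
```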
The resample_factor_reciprocal also happens to be our
anti-alias cutoff. In this case it represents the minimum
output bandwidth needed to fully represent the input.
Cap the output bandwidth to 92%.
Even at a 48kHz output rate this still translates to 100% of the source bandwidth.
This just provides a little bit of anti-alias filtering.
There is more than likely nothing there to filter,
but it doesn't hurt or cost us anything to make sure.
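A hedged sketch of that cutoff choice; the constant and function names here are illustrative, not the actual field names:

```rust
/// Upper bound on the cutoff as a fraction of the output sample rate,
/// leaving the windowed-sinc filter a little transition room.
const BANDWIDTH_CAP: f64 = 0.92;

fn anti_alias_cutoff(input_rate: f64, output_rate: f64) -> f64 {
    // e.g. 44_100.0 / 48_000.0 = 0.91875 when upsampling CD audio to 48 kHz,
    // which is below the cap, so the full source bandwidth is preserved.
    let resample_factor_reciprocal = input_rate / output_rate;
    resample_factor_reciprocal.min(BANDWIDTH_CAP)
}
```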
Since we are including the pipeline latency in the position, we need to seek to the correct position when going from paused to play.
We can also drop the ALSA and PulseAudio buffers instead of draining them, since their latencies are already factored in.
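A tiny self-contained illustration of why the seek is needed (all numbers made up):

```rust
fn main() {
    let decoder_position_ms: u32 = 60_500; // where the decoder actually is
    let pipeline_latency_ms: u32 = 350; // audio still buffered in the pipeline/sink

    // The position reported while paused already subtracts the pipeline
    // latency, so it trails the decoder by exactly that amount.
    let reported_position_ms = decoder_position_ms - pipeline_latency_ms;

    // On play we drop the buffered audio and seek the decoder back here, so
    // playback resumes from the position the listener last actually heard.
    println!("seek to {reported_position_ms} ms"); // 60150 ms
}
```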
When started at boot as a service, discovery may fail because it tries to bind to interfaces before the network is actually up.
This could be prevented in systemd by starting the service after network-online.target, but that requires a wait-online.service to also be enabled, which is not always the case, since a wait-online.service can potentially hang the boot process until it times out in certain situations.
This allows discovery to retry every 10 seconds during the first 60 seconds of uptime before giving up, papering over the issue without holding up the boot process.
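A minimal sketch of that retry policy, assuming a synchronous start_discovery closure and using process start as a stand-in for system uptime:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

const RETRY_INTERVAL: Duration = Duration::from_secs(10);
const RETRY_WINDOW: Duration = Duration::from_secs(60);

fn start_discovery_with_retry(
    start_discovery: impl Fn() -> Result<(), std::io::Error>,
) -> Result<(), std::io::Error> {
    let started = Instant::now();
    loop {
        match start_discovery() {
            Ok(()) => return Ok(()),
            // Still inside the first-minute window: the network may simply
            // not be up yet, so wait a bit and try again.
            Err(_) if started.elapsed() < RETRY_WINDOW => sleep(RETRY_INTERVAL),
            // Past the window: the failure is probably real, so surface it.
            Err(e) => return Err(e),
        }
    }
}
```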
Time implements From<f64> (as seconds), so there's no need to manually calculate it beyond converting ms to seconds.
If we grab the TimeBase in new, we don't need to continually call decoder.codec_params().time_base every time we want to convert a ts to ms.
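A hedged sketch of both points using Symphonia's types; the struct and method names other than the Symphonia ones are illustrative:

```rust
use symphonia::core::units::{Time, TimeBase};

struct PositionTracker {
    // Grabbed once from decoder.codec_params().time_base at construction,
    // rather than re-fetched on every conversion.
    time_base: TimeBase,
}

impl PositionTracker {
    fn new(time_base: TimeBase) -> Self {
        Self { time_base }
    }

    /// Decoder timestamp (time-base units) -> milliseconds.
    fn ts_to_ms(&self, ts: u64) -> u64 {
        let time = self.time_base.calc_time(ts);
        // Time is whole seconds plus a fractional part; fold it back to ms.
        (time.seconds as f64 * 1000.0 + time.frac * 1000.0) as u64
    }

    /// Milliseconds -> decoder timestamp, via `Time: From<f64>` (seconds).
    fn ms_to_ts(&self, ms: u64) -> u64 {
        let time = Time::from(ms as f64 / 1000.0);
        self.time_base.calc_timestamp(time)
    }
}
```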
release-dist-optimized inherits from `release`. Useful if you're distributing librespot as part of a project. The differences are:
panic = "abort": makes librespot abort instead of unwinding and hanging on a panic.
Extremely useful when running librespot unattended, as a system service for example, to allow for auto-restarts.
codegen-units = 1 and lto = true: take slightly longer to compile but produce more optimized binaries.
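Assembled from the points above, the profile would look roughly like this in Cargo.toml (the exact profile shipped may differ):

```toml
[profile.release-dist-optimized]
inherits = "release"
panic = "abort"   # abort instead of unwinding (and potentially hanging) on panic
codegen-units = 1 # slower to compile, better optimized
lto = true        # likewise: whole-program link-time optimization
```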
`collect` is probably fine, but for code this hot it's worth the couple of extra lines to make certain there's only ever one allocation for the returned Vec.
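As an illustration (the element types here are placeholders):

```rust
/// One up-front allocation of exactly the right size, then pure pushes
/// into already-reserved space.
fn convert(samples: &[i32]) -> Vec<f64> {
    let mut out = Vec::with_capacity(samples.len());
    out.extend(samples.iter().map(|&s| s as f64));
    out
}

// The terser equivalent; usually fine, but the single-allocation guarantee
// is implicit rather than spelled out:
// samples.iter().map(|&s| s as f64).collect::<Vec<f64>>()
```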
It would be so much easier to use elapsed, but elapsed could potentially panic in rare cases.
See: https://doc.rust-lang.org/std/time/struct.Instant.html#monotonicity
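One way to sidestep the panic is to compute the duration by hand with the checked API; a sketch:

```rust
use std::time::{Duration, Instant};

/// Like `start.elapsed().as_millis()`, but a non-monotonic clock degrades
/// to 0 instead of ever panicking.
fn elapsed_ms_since(start: Instant) -> u128 {
    Instant::now()
        .checked_duration_since(start)
        .unwrap_or(Duration::ZERO)
        .as_millis()
}
```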
Otherwise this is pretty straightforward.
If anything fails while getting expected_position_ms, it will return 0, which will trigger a notify if either stream_position_ms or decoder_position_ms is > 1000.
If all goes well, it's simply a matter of calculating the max delta between expected_position_ms and stream_position_ms, and between expected_position_ms and decoder_position_ms.
So if the decoder or the sample pipeline is off by more than 1 sec, we notify.
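A hedged sketch of that check; the names and the one-second threshold mirror the prose above, not necessarily the exact code:

```rust
const NOTIFY_THRESHOLD_MS: u32 = 1000;

fn should_notify(
    expected_position_ms: u32, // 0 if anything failed while computing it
    stream_position_ms: u32,
    decoder_position_ms: u32,
) -> bool {
    // Largest drift of either the stream or the decoder from the expected position.
    let max_delta = expected_position_ms
        .abs_diff(stream_position_ms)
        .max(expected_position_ms.abs_diff(decoder_position_ms));

    max_delta > NOTIFY_THRESHOLD_MS
}
```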