lib/promscrape: use the standard net/http.Client instead of fasthttp.Client for scraping targets in non-streaming mode

While fasthttp.Client uses less CPU and RAM when scraping targets with small responses (up to 10K metrics),
it doesn't work well when scraping targets with big responses such as kube-state-metrics.
In this case it can use significantly more memory than net/http.Client,
since fasthttp.Client reads the full response into memory and then re-uses the resulting large buffer
for subsequent scrapes.
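
For illustration, here is a minimal sketch of a net/http-based read path with a response size limit, in the spirit of this change. It is not the actual lib/promscrape code: the client setup, the buffer pooling and the maxSize parameter (standing in for -promscrape.maxScrapeSize) are simplified:

```go
package promscrapesketch

import (
	"fmt"
	"io"
	"net/http"
)

// readScrapeResponse streams the response body into dst, reading at most
// maxSize bytes. Unlike fasthttp.Client, net/http doesn't buffer the whole
// response in an internal buffer that is retained for further scrapes,
// so memory usage tracks the actual response size.
func readScrapeResponse(c *http.Client, url string, dst []byte, maxSize int64) ([]byte, error) {
	resp, err := c.Get(url)
	if err != nil {
		return dst, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return dst, fmt.Errorf("unexpected status code when scraping %q: %d", url, resp.StatusCode)
	}
	// Limit the amount of data read from the response body.
	lr := &io.LimitedReader{R: resp.Body, N: maxSize}
	body, err := io.ReadAll(lr)
	dst = append(dst, body...)
	if err != nil {
		return dst, err
	}
	if int64(len(body)) >= maxSize {
		return dst, fmt.Errorf("the response from %q exceeds %d bytes", url, maxSize)
	}
	return dst, nil
}
```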

Additionally, fasthttp.Client-based scraping had various issues with proxying, redirects
and scrape timeouts, such as the following:

- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1945
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5425
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2794
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1017

This should also reduce memory usage in the case when a target returns a big response
that was previously read fully into memory by fasthttp.Client on the first scrape, before switching
to stream parsing mode for subsequent scrapes. Now the switch to stream parsing mode happens
on the first scrape, right after the response body is read into memory and its size is found
to exceed the value of the -promscrape.minResponseSizeForStreamParse command-line flag.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5567

Overrides https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4931
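
For illustration, here is a standalone sketch of the switching logic described above, modeled on the needStreamParseMode helper added by this commit. The flag plumbing is simplified into plain parameters, and the final threshold comparison is an assumption, since the corresponding hunk below is truncated:

```go
package promscrapesketch

// needStreamParseMode reports whether a scraped response should be processed
// in stream parsing mode. streamParse models the global -promscrape.streamParse
// command-line flag, configStreamParse models the per-target `stream_parse`
// option, and minResponseSize models -promscrape.minResponseSizeForStreamParse.
func needStreamParseMode(responseSize int64, streamParse, configStreamParse bool, minResponseSize int64) bool {
	if streamParse || configStreamParse {
		// Stream parsing mode is explicitly enabled.
		return true
	}
	if minResponseSize <= 0 {
		// The automatic switch is disabled.
		return false
	}
	// Assumed: switch to stream parsing mode once the response body
	// reaches the configured threshold.
	return responseSize >= minResponseSize
}
```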
Aliaksandr Valialkin 2024-01-30 17:51:44 +02:00
parent a20c289228
commit bc7cf4950b
63 changed files with 151 additions and 15289 deletions


@@ -771,12 +771,13 @@ e.g. it sets `scrape_series_added` metric to zero. See [these docs](#automatical
## Stream parsing mode
By default, `vmagent` reads the full response body from the scrape target into memory, then parses it, applies [relabeling](#relabeling)
and then pushes the resulting metrics to the configured `-remoteWrite.url`. This mode works well for the majority of cases
when the scrape target exposes a small number of metrics (e.g. less than 10 thousand). But this mode may use large amounts of memory
when the scrape target exposes a big number of metrics. In this case it is recommended to enable stream parsing mode.
When this mode is enabled, `vmagent` reads the response from the scrape target in chunks, immediately processes every chunk
and pushes the processed metrics to remote storage. This allows saving memory when scraping targets that expose millions of metrics.
By default, `vmagent` parses the full response from the scrape target, applies [relabeling](#relabeling)
and then pushes the resulting metrics to the configured `-remoteWrite.url` in one go. This mode works well for the majority of cases
when the scrape target exposes a small number of metrics (e.g. less than 10K). But this mode may use large amounts of memory
when the scrape target exposes a big number of metrics (for example, when `vmagent` scrapes [`kube-state-metrics`](https://github.com/kubernetes/kube-state-metrics)
in a large Kubernetes cluster). It is recommended to enable stream parsing mode for such targets.
When this mode is enabled, `vmagent` processes the response from the scrape target in chunks.
This allows saving memory when scraping targets that expose millions of metrics.
Stream parsing mode is automatically enabled for scrape targets returning response bodies with sizes bigger than
the `-promscrape.minResponseSizeForStreamParse` command-line flag value. Additionally,
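
As a rough illustration of chunked processing (this is not vmagent's actual parser, which uses an internal stream.Parse with row callbacks, as shown in the hunks below), the already-read body can be parsed line by line so that parsed rows never have to be held in memory all at once:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
)

// processInStreamMode parses a metrics exposition body in chunks:
// every line is handed to the callback as soon as it is read,
// instead of materializing all parsed rows in memory first.
func processInStreamMode(body []byte, process func(line string)) error {
	sc := bufio.NewScanner(bytes.NewReader(body))
	for sc.Scan() {
		process(sc.Text())
	}
	return sc.Err()
}

func main() {
	body := []byte("http_requests_total 123\nprocess_cpu_seconds_total 4.5\n")
	_ = processInStreamMode(body, func(line string) {
		fmt.Println("row:", line)
	})
}
```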

go.mod

@@ -8,10 +8,6 @@ require (
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.2.1
github.com/VictoriaMetrics/easyproto v0.1.4
github.com/VictoriaMetrics/fastcache v1.12.2
// Do not use the original github.com/valyala/fasthttp because of issues
// like https://github.com/valyala/fasthttp/commit/996610f021ff45fdc98c2ce7884d5fa4e7f9199b
github.com/VictoriaMetrics/fasthttp v1.2.0
github.com/VictoriaMetrics/metrics v1.31.0
github.com/VictoriaMetrics/metricsql v0.70.0
github.com/aws/aws-sdk-go-v2 v1.24.1
@@ -34,7 +30,7 @@ require (
github.com/valyala/gozstd v1.20.1
github.com/valyala/histogram v1.2.0
github.com/valyala/quicktemplate v1.7.0
golang.org/x/net v0.20.0
golang.org/x/net v0.20.0 // indirect
golang.org/x/oauth2 v0.16.0
golang.org/x/sys v0.16.0
google.golang.org/api v0.159.0

go.sum

@@ -63,8 +63,6 @@ github.com/VictoriaMetrics/easyproto v0.1.4 h1:r8cNvo8o6sR4QShBXQd1bKw/VVLSQma/V
github.com/VictoriaMetrics/easyproto v0.1.4/go.mod h1:QlGlzaJnDfFd8Lk6Ci/fuLxfTo3/GThPs2KH23mv710=
github.com/VictoriaMetrics/fastcache v1.12.2 h1:N0y9ASrJ0F6h0QaC3o6uJb3NIZ9VKLjCM7NQbSmF7WI=
github.com/VictoriaMetrics/fastcache v1.12.2/go.mod h1:AmC+Nzz1+3G2eCPapF6UcsnkThDcMsQicp4xDukwJYI=
github.com/VictoriaMetrics/fasthttp v1.2.0 h1:nd9Wng4DlNtaI27WlYh5mGXCJOmee/2c2blTJwfyU9I=
github.com/VictoriaMetrics/fasthttp v1.2.0/go.mod h1:zv5YSmasAoSyv8sBVexfArzFDIGGTN4TfCKAtAw7IfE=
github.com/VictoriaMetrics/metrics v1.24.0/go.mod h1:eFT25kvsTidQFHb6U0oa0rTrDRdz4xTYjpL8+UPohys=
github.com/VictoriaMetrics/metrics v1.31.0 h1:X6+nBvAP0UB+GjR0Ht9hhQ3pjL1AN4b8dt9zFfzTsUo=
github.com/VictoriaMetrics/metrics v1.31.0/go.mod h1:r7hveu6xMdUACXvB8TYdAj8WEsKzWB0EkpJN+RDtOf8=


@@ -14,7 +14,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs/fscore"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/fasthttp"
"github.com/cespare/xxhash/v2"
"golang.org/x/oauth2"
"golang.org/x/oauth2/clientcredentials"
@@ -343,24 +342,6 @@ func (ac *Config) SetHeaders(req *http.Request, setAuthHeader bool) error {
return nil
}
// SetFasthttpHeaders sets the configured ac headers to req.
func (ac *Config) SetFasthttpHeaders(req *fasthttp.Request, setAuthHeader bool) error {
reqHeaders := &req.Header
for _, h := range ac.headers {
reqHeaders.Set(h.key, h.value)
}
if setAuthHeader {
ah, err := ac.GetAuthHeader()
if err != nil {
return fmt.Errorf("failed to obtain Authorization request header: %w", err)
}
if ah != "" {
reqHeaders.Set("Authorization", ah)
}
}
return nil
}
// GetAuthHeader returns optional `Authorization: ...` http header.
func (ac *Config) GetAuthHeader() (string, error) {
if f := ac.getAuthHeaderCached; f != nil {


@@ -5,7 +5,6 @@ import (
"net/http/httptest"
"testing"
"github.com/VictoriaMetrics/fasthttp"
"gopkg.in/yaml.v2"
)
@@ -307,12 +306,6 @@ func TestConfigGetAuthHeaderFailure(t *testing.T) {
t.Fatalf("expecting non-nil error from SetHeaders()")
}
// Verify that cfg.SetFasthttpHeaders() returns error
var fhreq fasthttp.Request
if err := cfg.SetFasthttpHeaders(&fhreq, true); err == nil {
t.Fatalf("expecting non-nil error from SetFasthttpHeaders()")
}
// Verify that the tls cert cannot be loaded properly if it exists
if f := cfg.getTLSCertCached; f != nil {
cert, err := f(nil)
@@ -421,16 +414,6 @@ func TestConfigGetAuthHeaderSuccess(t *testing.T) {
if ah != ahExpected {
t.Fatalf("unexpected auth header from net/http request; got %q; want %q", ah, ahExpected)
}
// Make sure that cfg.SetFasthttpHeaders() properly set Authorization header
var fhreq fasthttp.Request
if err := cfg.SetFasthttpHeaders(&fhreq, true); err != nil {
t.Fatalf("unexpected error in SetFasthttpHeaders(): %s", err)
}
ahb := fhreq.Header.Peek("Authorization")
if string(ahb) != ahExpected {
t.Fatalf("unexpected auth header from fasthttp request; got %q; want %q", ahb, ahExpected)
}
}
// Zero config
@@ -578,16 +561,6 @@ func TestConfigHeaders(t *testing.T) {
t.Fatalf("unexpected value for net/http header %q; got %q; want %q", h.key, v, h.value)
}
}
var fhreq fasthttp.Request
if err := c.SetFasthttpHeaders(&fhreq, false); err != nil {
t.Fatalf("unexpected error in SetFasthttpHeaders(): %s", err)
}
for _, h := range headersParsed {
v := fhreq.Header.Peek(h.key)
if string(v) != h.value {
t.Fatalf("unexpected value for fasthttp header %q; got %q; want %q", h.key, v, h.value)
}
}
}
f(nil, "")
f([]string{"foo: bar"}, "foo: bar\r\n")


@@ -13,10 +13,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
"github.com/VictoriaMetrics/fasthttp"
"github.com/VictoriaMetrics/metrics"
)
@@ -37,57 +33,16 @@ var (
)
type client struct {
// hc is the default client optimized for common case of scraping targets with moderate number of metrics.
hc *fasthttp.HostClient
// sc (aka `stream client`) is used instead of hc if ScrapeWork.StreamParse is set.
// It may be useful for scraping targets with millions of metrics per target.
sc *http.Client
c *http.Client
ctx context.Context
scrapeURL string
scrapeTimeoutSecondsStr string
hostPort string
requestURI string
setHeaders func(req *http.Request) error
setProxyHeaders func(req *http.Request) error
setFasthttpHeaders func(req *fasthttp.Request) error
setFasthttpProxyHeaders func(req *fasthttp.Request) error
denyRedirects bool
disableCompression bool
disableKeepAlive bool
}
func addMissingPort(addr string, isTLS bool) string {
if strings.Contains(addr, ":") {
return addr
}
if isTLS {
return concatTwoStrings(addr, ":443")
}
return concatTwoStrings(addr, ":80")
}
func concatTwoStrings(x, y string) string {
bb := bbPool.Get()
b := bb.B[:0]
b = append(b, x...)
b = append(b, y...)
s := bytesutil.InternBytes(b)
bb.B = b
bbPool.Put(bb)
return s
}
const scrapeUserAgent = "vm_promscrape"
func newClient(ctx context.Context, sw *ScrapeWork) (*client, error) {
var u fasthttp.URI
u.Update(sw.ScrapeURL)
hostPort := string(u.Host())
dialAddr := hostPort
requestURI := string(u.RequestURI())
isTLS := string(u.Scheme()) == "https"
isTLS := strings.HasPrefix(sw.ScrapeURL, "https://")
var tlsCfg *tls.Config
if isTLS {
var err error
@@ -96,59 +51,31 @@ func newClient(ctx context.Context, sw *ScrapeWork) (*client, error) {
return nil, fmt.Errorf("cannot initialize tls config: %w", err)
}
}
setProxyHeaders := func(req *http.Request) error { return nil }
setFasthttpProxyHeaders := func(req *fasthttp.Request) error { return nil }
setHeaders := func(req *http.Request) error {
return sw.AuthConfig.SetHeaders(req, true)
}
setProxyHeaders := func(req *http.Request) error {
return nil
}
proxyURL := sw.ProxyURL
if !isTLS && proxyURL.IsHTTPOrHTTPS() {
// Send full sw.ScrapeURL in requests to a proxy host for non-TLS scrape targets
// like net/http package from Go does.
// See https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers
pu := proxyURL.GetURL()
dialAddr = pu.Host
requestURI = sw.ScrapeURL
isTLS = pu.Scheme == "https"
if isTLS {
if pu.Scheme == "https" {
var err error
tlsCfg, err = sw.ProxyAuthConfig.NewTLSConfig()
if err != nil {
return nil, fmt.Errorf("cannot initialize proxy tls config: %w", err)
}
}
proxyURLOrig := proxyURL
setProxyHeaders = func(req *http.Request) error {
return proxyURLOrig.SetHeaders(sw.ProxyAuthConfig, req)
return proxyURL.SetHeaders(sw.ProxyAuthConfig, req)
}
setFasthttpProxyHeaders = func(req *fasthttp.Request) error {
return proxyURLOrig.SetFasthttpHeaders(sw.ProxyAuthConfig, req)
}
proxyURL = &proxy.URL{}
}
hostPort = addMissingPort(hostPort, isTLS)
dialAddr = addMissingPort(dialAddr, isTLS)
dialFunc, err := newStatDialFunc(proxyURL, sw.ProxyAuthConfig)
if err != nil {
return nil, fmt.Errorf("cannot create dial func: %w", err)
}
hc := &fasthttp.HostClient{
Addr: dialAddr,
// Name used in User-Agent request header
Name: scrapeUserAgent,
Dial: dialFunc,
IsTLS: isTLS,
TLSConfig: tlsCfg,
MaxIdleConnDuration: 2 * sw.ScrapeInterval,
ReadTimeout: sw.ScrapeTimeout,
WriteTimeout: 10 * time.Second,
MaxResponseBodySize: maxScrapeSize.IntN(),
MaxIdempotentRequestAttempts: 1,
ReadBufferSize: maxResponseHeadersSize.IntN(),
}
var sc *http.Client
var proxyURLFunc func(*http.Request) (*url.URL, error)
if pu := sw.ProxyURL.GetURL(); pu != nil {
proxyURLFunc = http.ProxyURL(pu)
}
sc = &http.Client{
hc := &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsCfg,
Proxy: proxyURLFunc,
@@ -163,41 +90,29 @@ func newClient(ctx context.Context, sw *ScrapeWork) (*client, error) {
Timeout: sw.ScrapeTimeout,
}
if sw.DenyRedirects {
sc.CheckRedirect = func(req *http.Request, via []*http.Request) error {
hc.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
}
c := &client{
hc: hc,
c: hc,
ctx: ctx,
sc: sc,
scrapeURL: sw.ScrapeURL,
scrapeTimeoutSecondsStr: fmt.Sprintf("%.3f", sw.ScrapeTimeout.Seconds()),
hostPort: hostPort,
requestURI: requestURI,
setHeaders: func(req *http.Request) error {
return sw.AuthConfig.SetHeaders(req, true)
},
setHeaders: setHeaders,
setProxyHeaders: setProxyHeaders,
setFasthttpHeaders: func(req *fasthttp.Request) error {
return sw.AuthConfig.SetFasthttpHeaders(req, true)
},
setFasthttpProxyHeaders: setFasthttpProxyHeaders,
denyRedirects: sw.DenyRedirects,
disableCompression: sw.DisableCompression,
disableKeepAlive: sw.DisableKeepAlive,
}
return c, nil
}
func (c *client) GetStreamReader() (*streamReader, error) {
deadline := time.Now().Add(c.sc.Timeout)
func (c *client) ReadData(dst *bytesutil.ByteBuffer) error {
deadline := time.Now().Add(c.c.Timeout)
ctx, cancel := context.WithDeadline(c.ctx, deadline)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.scrapeURL, nil)
if err != nil {
cancel()
return nil, fmt.Errorf("cannot create request for %q: %w", c.scrapeURL, err)
return fmt.Errorf("cannot create request for %q: %w", c.scrapeURL, err)
}
// The following `Accept` header has been copied from Prometheus sources.
// See https://github.com/prometheus/prometheus/blob/f9d21f10ecd2a343a381044f131ea4e46381ce09/scrape/scrape.go#L532 .
@@ -208,236 +123,59 @@ func (c *client) GetStreamReader() (*streamReader, error) {
// Set X-Prometheus-Scrape-Timeout-Seconds like Prometheus does, since it is used by some exporters such as PushProx.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1179#issuecomment-813117162
req.Header.Set("X-Prometheus-Scrape-Timeout-Seconds", c.scrapeTimeoutSecondsStr)
req.Header.Set("User-Agent", scrapeUserAgent)
req.Header.Set("User-Agent", "vm_promscrape")
if err := c.setHeaders(req); err != nil {
cancel()
return nil, fmt.Errorf("failed to set request headers for %q: %w", c.scrapeURL, err)
return fmt.Errorf("failed to set request headers for %q: %w", c.scrapeURL, err)
}
if err := c.setProxyHeaders(req); err != nil {
cancel()
return nil, fmt.Errorf("failed to set proxy request headers for %q: %w", c.scrapeURL, err)
return fmt.Errorf("failed to set proxy request headers for %q: %w", c.scrapeURL, err)
}
scrapeRequests.Inc()
resp, err := c.sc.Do(req)
resp, err := c.c.Do(req)
if err != nil {
cancel()
return nil, fmt.Errorf("cannot scrape %q: %w", c.scrapeURL, err)
if ue, ok := err.(*url.Error); ok && ue.Timeout() {
scrapesTimedout.Inc()
}
return fmt.Errorf("cannot perform request to %q: %w", c.scrapeURL, err)
}
if resp.StatusCode != http.StatusOK {
metrics.GetOrCreateCounter(fmt.Sprintf(`vm_promscrape_scrapes_total{status_code="%d"}`, resp.StatusCode)).Inc()
respBody, _ := io.ReadAll(resp.Body)
_ = resp.Body.Close()
cancel()
return nil, fmt.Errorf("unexpected status code returned when scraping %q: %d; expecting %d; response body: %q",
return fmt.Errorf("unexpected status code returned when scraping %q: %d; expecting %d; response body: %q",
c.scrapeURL, resp.StatusCode, http.StatusOK, respBody)
}
scrapesOK.Inc()
sr := &streamReader{
r: resp.Body,
cancel: cancel,
scrapeURL: c.scrapeURL,
maxBodySize: int64(c.hc.MaxResponseBodySize),
}
return sr, nil
}
// checks fasthttp status code for redirect as standard http/client does.
func isStatusRedirect(statusCode int) bool {
switch statusCode {
case 301, 302, 303, 307, 308:
return true
// Read the data from resp.Body
r := &io.LimitedReader{
R: resp.Body,
N: maxScrapeSize.N,
}
return false
}
func (c *client) ReadData(dst []byte) ([]byte, error) {
deadline := time.Now().Add(c.hc.ReadTimeout)
req := fasthttp.AcquireRequest()
req.SetRequestURI(c.requestURI)
req.Header.SetHost(c.hostPort)
// The following `Accept` header has been copied from Prometheus sources.
// See https://github.com/prometheus/prometheus/blob/f9d21f10ecd2a343a381044f131ea4e46381ce09/scrape/scrape.go#L532 .
// This is needed as a workaround for scraping stupid Java-based servers such as Spring Boot.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/608 for details.
// Do not bloat the `Accept` header with OpenMetrics shit, since it looks like dead standard now.
req.Header.Set("Accept", "text/plain;version=0.0.4;q=1,*/*;q=0.1")
// Set X-Prometheus-Scrape-Timeout-Seconds like Prometheus does, since it is used by some exporters such as PushProx.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1179#issuecomment-813117162
req.Header.Set("X-Prometheus-Scrape-Timeout-Seconds", c.scrapeTimeoutSecondsStr)
if err := c.setFasthttpHeaders(req); err != nil {
return nil, fmt.Errorf("failed to set request headers for %q: %w", c.scrapeURL, err)
}
if err := c.setFasthttpProxyHeaders(req); err != nil {
return nil, fmt.Errorf("failed to set proxy request headers for %q: %w", c.scrapeURL, err)
}
if !*disableCompression && !c.disableCompression {
req.Header.Set("Accept-Encoding", "gzip")
}
if *disableKeepAlive || c.disableKeepAlive {
req.SetConnectionClose()
}
resp := fasthttp.AcquireResponse()
swapResponseBodies := len(dst) == 0
if swapResponseBodies {
// An optimization: write response directly to dst.
// This should reduce memory usage when scraping big targets.
dst = resp.SwapBody(dst)
}
ctx, cancel := context.WithDeadline(c.ctx, deadline)
defer cancel()
err := doRequestWithPossibleRetry(ctx, c.hc, req, resp)
statusCode := resp.StatusCode()
redirectsCount := 0
for err == nil && isStatusRedirect(statusCode) {
if redirectsCount > 5 {
err = fmt.Errorf("too many redirects")
break
}
if c.denyRedirects {
err = fmt.Errorf("cannot follow redirects if `follow_redirects: false` is set")
break
}
// It is expected that the redirect is made on the same host.
// Otherwise it won't work.
location := resp.Header.Peek("Location")
if len(location) == 0 {
err = fmt.Errorf("missing Location header")
break
}
req.URI().UpdateBytes(location)
err = doRequestWithPossibleRetry(ctx, c.hc, req, resp)
statusCode = resp.StatusCode()
redirectsCount++
}
if swapResponseBodies {
dst = resp.SwapBody(dst)
}
fasthttp.ReleaseRequest(req)
_, err = dst.ReadFrom(r)
_ = resp.Body.Close()
cancel()
if err != nil {
fasthttp.ReleaseResponse(resp)
if err == fasthttp.ErrTimeout {
if ue, ok := err.(*url.Error); ok && ue.Timeout() {
scrapesTimedout.Inc()
return dst, fmt.Errorf("error when scraping %q with timeout %s: %w", c.scrapeURL, c.hc.ReadTimeout, err)
}
if err == fasthttp.ErrBodyTooLarge {
return fmt.Errorf("cannot read data from %s: %w", c.scrapeURL, err)
}
if int64(len(dst.B)) >= maxScrapeSize.N {
maxScrapeSizeExceeded.Inc()
return dst, fmt.Errorf("the response from %q exceeds -promscrape.maxScrapeSize=%d; "+
"either reduce the response size for the target or increase -promscrape.maxScrapeSize", c.scrapeURL, maxScrapeSize.N)
return fmt.Errorf("the response from %q exceeds -promscrape.maxScrapeSize=%d; "+
"either reduce the response size for the target or increase -promscrape.maxScrapeSize command-line flag value", c.scrapeURL, maxScrapeSize.N)
}
return dst, fmt.Errorf("error when scraping %q: %w", c.scrapeURL, err)
}
if ce := resp.Header.Peek("Content-Encoding"); string(ce) == "gzip" {
var err error
if swapResponseBodies {
zb := gunzipBufPool.Get()
zb.B, err = fasthttp.AppendGunzipBytes(zb.B[:0], dst)
dst = append(dst[:0], zb.B...)
gunzipBufPool.Put(zb)
} else {
dst, err = fasthttp.AppendGunzipBytes(dst, resp.Body())
}
if err != nil {
fasthttp.ReleaseResponse(resp)
scrapesGunzipFailed.Inc()
return dst, fmt.Errorf("cannot ungzip response from %q: %w", c.scrapeURL, err)
}
scrapesGunzipped.Inc()
} else if !swapResponseBodies {
dst = append(dst, resp.Body()...)
}
fasthttp.ReleaseResponse(resp)
if len(dst) > c.hc.MaxResponseBodySize {
maxScrapeSizeExceeded.Inc()
return dst, fmt.Errorf("the response from %q exceeds -promscrape.maxScrapeSize=%d (the actual response size is %d bytes); "+
"either reduce the response size for the target or increase -promscrape.maxScrapeSize", c.scrapeURL, maxScrapeSize.N, len(dst))
}
if statusCode != fasthttp.StatusOK {
metrics.GetOrCreateCounter(fmt.Sprintf(`vm_promscrape_scrapes_total{status_code="%d"}`, statusCode)).Inc()
return dst, fmt.Errorf("unexpected status code returned when scraping %q: %d; expecting %d; response body: %q",
c.scrapeURL, statusCode, fasthttp.StatusOK, dst)
}
scrapesOK.Inc()
return dst, nil
return nil
}
var gunzipBufPool bytesutil.ByteBufferPool
var (
maxScrapeSizeExceeded = metrics.NewCounter(`vm_promscrape_max_scrape_size_exceeded_errors_total`)
scrapesTimedout = metrics.NewCounter(`vm_promscrape_scrapes_timed_out_total`)
scrapesOK = metrics.NewCounter(`vm_promscrape_scrapes_total{status_code="200"}`)
scrapesGunzipped = metrics.NewCounter(`vm_promscrape_scrapes_gunziped_total`)
scrapesGunzipFailed = metrics.NewCounter(`vm_promscrape_scrapes_gunzip_failed_total`)
scrapeRequests = metrics.NewCounter(`vm_promscrape_scrape_requests_total`)
scrapeRetries = metrics.NewCounter(`vm_promscrape_scrape_retries_total`)
)
func doRequestWithPossibleRetry(ctx context.Context, hc *fasthttp.HostClient, req *fasthttp.Request, resp *fasthttp.Response) error {
scrapeRequests.Inc()
var reqErr error
// Return true if the request execution is completed and retry is not required
attempt := func() bool {
// Use DoCtx instead of Do in order to support context cancellation
reqErr = hc.DoCtx(ctx, req, resp)
if reqErr == nil {
statusCode := resp.StatusCode()
if statusCode != fasthttp.StatusTooManyRequests {
return true
}
} else if reqErr != fasthttp.ErrConnectionClosed && !strings.Contains(reqErr.Error(), "broken pipe") {
return true
}
return false
}
if attempt() {
return reqErr
}
// The first attempt was unsuccessful. Use exponential backoff for further attempts.
// Perform the second attempt immediately after the first attempt - this should help
// in cases when the remote side closes the keep-alive connection before the first attempt.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3293
sleepTime := time.Second
// It is expected that the deadline is already set to ctx, so the loop below
// should eventually finish if all the attempt() calls are unsuccessful.
for {
scrapeRetries.Inc()
if attempt() {
return reqErr
}
sleepTime += sleepTime
if !discoveryutils.SleepCtx(ctx, sleepTime) {
return reqErr
}
}
}
type streamReader struct {
r io.ReadCloser
cancel context.CancelFunc
bytesRead int64
scrapeURL string
maxBodySize int64
}
func (sr *streamReader) Read(p []byte) (int, error) {
n, err := sr.r.Read(p)
sr.bytesRead += int64(n)
if err == nil && sr.bytesRead > sr.maxBodySize {
maxScrapeSizeExceeded.Inc()
err = fmt.Errorf("the response from %q exceeds -promscrape.maxScrapeSize=%d; "+
"either reduce the response size for the target or increase -promscrape.maxScrapeSize", sr.scrapeURL, sr.maxBodySize)
}
return n, err
}
func (sr *streamReader) MustClose() {
sr.cancel()
if err := sr.r.Close(); err != nil {
logger.Errorf("cannot close reader: %s", err)
}
}


@@ -455,7 +455,6 @@ func newScraper(sw *ScrapeWork, group string, pushData func(at *auth.Token, wr *
sc.sw.Config = sw
sc.sw.ScrapeGroup = group
sc.sw.ReadData = c.ReadData
sc.sw.GetStreamReader = c.GetStreamReader
sc.sw.PushData = pushData
return sc, nil
}


@@ -4,7 +4,6 @@ import (
"bytes"
"flag"
"fmt"
"io"
"math"
"math/bits"
"strings"
@@ -17,7 +16,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/leveledbytebufferpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
@@ -186,11 +184,8 @@ type scrapeWork struct {
// Config for the scrape.
Config *ScrapeWork
// ReadData is called for reading the data.
ReadData func(dst []byte) ([]byte, error)
// GetStreamReader is called if Config.StreamParse is set.
GetStreamReader func() (*streamReader, error)
// ReadData is called for reading the scrape response data into dst.
ReadData func(dst *bytesutil.ByteBuffer) error
// PushData is called for pushing collected data.
PushData func(at *auth.Token, wr *prompbmarshal.WriteRequest)
@@ -400,7 +395,10 @@ var (
pushDataDuration = metrics.NewHistogram("vm_promscrape_push_data_duration_seconds")
)
func (sw *scrapeWork) mustSwitchToStreamParseMode(responseSize int) bool {
func (sw *scrapeWork) needStreamParseMode(responseSize int) bool {
if *streamParse || sw.Config.StreamParse {
return true
}
if minResponseSizeForStreamParse.N <= 0 {
return false
}
@@ -409,59 +407,61 @@ func (sw *scrapeWork) mustSwitchToStreamParseMode(responseSize int) bool {
// getTargetResponse() fetches response from sw target in the same way as when scraping the target.
func (sw *scrapeWork) getTargetResponse() ([]byte, error) {
// use stream reader when stream mode enabled
if *streamParse || sw.Config.StreamParse || sw.mustSwitchToStreamParseMode(sw.prevBodyLen) {
// Read the response in stream mode.
sr, err := sw.GetStreamReader()
if err != nil {
var bb bytesutil.ByteBuffer
if err := sw.ReadData(&bb); err != nil {
return nil, err
}
data, err := io.ReadAll(sr)
sr.MustClose()
return data, err
}
// Read the response in usual mode.
return sw.ReadData(nil)
return bb.B, nil
}
func (sw *scrapeWork) scrapeInternal(scrapeTimestamp, realTimestamp int64) error {
if *streamParse || sw.Config.StreamParse || sw.mustSwitchToStreamParseMode(sw.prevBodyLen) {
// Read data from scrape targets in streaming manner.
// This case is optimized for targets exposing more than ten thousand of metrics per target.
return sw.scrapeStream(scrapeTimestamp, realTimestamp)
body := leveledbytebufferpool.Get(sw.prevBodyLen)
// Read the scrape response into body.
// It is OK to do this for stream parsing mode, since most of the RAM
// is occupied during parsing of the read response body below.
// This also allows measuring the real scrape duration, which doesn't include
// the time needed for processing of the read response.
err := sw.ReadData(body)
// Measure scrape duration.
endTimestamp := time.Now().UnixNano() / 1e6
scrapeDurationSeconds := float64(endTimestamp-realTimestamp) / 1e3
scrapeDuration.Update(scrapeDurationSeconds)
scrapeResponseSize.Update(float64(len(body.B)))
// The code below is CPU-bound and may allocate large amounts of memory.
// That's why it is a good idea to limit the number of concurrent goroutines
// which may execute this code, in order to limit memory usage under high load
// without sacrificing performance.
processScrapedDataConcurrencyLimitCh <- struct{}{}
if err == nil && sw.needStreamParseMode(len(body.B)) {
// Process response body from scrape target in streaming manner.
// This case is optimized for targets exposing more than ten thousand metrics per target,
// such as kube-state-metrics.
err = sw.processDataInStreamMode(scrapeTimestamp, realTimestamp, body, scrapeDurationSeconds)
} else {
// Process response body from scrape target at once.
// This case should work more optimally than stream parse for common case when scrape target exposes
// up to a few thousand metrics.
err = sw.processDataOneShot(scrapeTimestamp, realTimestamp, body.B, scrapeDurationSeconds, err)
}
// Common case: read all the data from scrape target to memory (body) and then process it.
// This case should work more optimally than stream parse code for common case when scrape target exposes
// up to a few thousand metrics.
body := leveledbytebufferpool.Get(sw.prevBodyLen)
var err error
body.B, err = sw.ReadData(body.B[:0])
releaseBody, err := sw.processScrapedData(scrapeTimestamp, realTimestamp, body, err)
if releaseBody {
<-processScrapedDataConcurrencyLimitCh
leveledbytebufferpool.Put(body)
}
return err
}
var processScrapedDataConcurrencyLimitCh = make(chan struct{}, cgroup.AvailableCPUs())
func (sw *scrapeWork) processScrapedData(scrapeTimestamp, realTimestamp int64, body *bytesutil.ByteBuffer, err error) (bool, error) {
// This function is CPU-bound, while it may allocate big amounts of memory.
// That's why it is a good idea to limit the number of concurrent calls to this function
// in order to limit memory usage under high load without sacrificing the performance.
processScrapedDataConcurrencyLimitCh <- struct{}{}
defer func() {
<-processScrapedDataConcurrencyLimitCh
}()
endTimestamp := time.Now().UnixNano() / 1e6
duration := float64(endTimestamp-realTimestamp) / 1e3
scrapeDuration.Update(duration)
scrapeResponseSize.Update(float64(len(body.B)))
func (sw *scrapeWork) processDataOneShot(scrapeTimestamp, realTimestamp int64, body []byte, scrapeDurationSeconds float64, err error) error {
up := 1
wc := writeRequestCtxPool.Get(sw.prevLabelsLen)
lastScrape := sw.loadLastScrape()
bodyString := bytesutil.ToUnsafeString(body.B)
bodyString := bytesutil.ToUnsafeString(body)
areIdenticalSeries := sw.areIdenticalSeries(lastScrape, bodyString)
if err != nil {
up = 0
@@ -499,7 +499,7 @@ func (sw *scrapeWork) processScrapedData(scrapeTimestamp, realTimestamp int64, b
}
am := &autoMetrics{
up: up,
scrapeDurationSeconds: duration,
scrapeDurationSeconds: scrapeDurationSeconds,
samplesScraped: samplesScraped,
samplesPostRelabeling: samplesPostRelabeling,
seriesAdded: seriesAdded,
@@ -510,85 +510,35 @@
sw.prevLabelsLen = len(wc.labels)
sw.prevBodyLen = len(bodyString)
wc.reset()
mustSwitchToStreamParse := sw.mustSwitchToStreamParseMode(len(bodyString))
if !mustSwitchToStreamParse {
// Return wc to the pool if the parsed response size was smaller than -promscrape.minResponseSizeForStreamParse
// This should reduce memory usage when scraping targets with big responses.
writeRequestCtxPool.Put(wc)
}
// body must be released only after wc is released, since wc refers to body.
if !areIdenticalSeries {
// Send stale markers for disappeared metrics with the real scrape timestamp
// in order to guarantee that query doesn't return data after this time for the disappeared metrics.
sw.sendStaleSeries(lastScrape, bodyString, realTimestamp, false)
sw.storeLastScrape(body.B)
sw.storeLastScrape(body)
}
sw.finalizeLastScrape()
tsmGlobal.Update(sw, up == 1, realTimestamp, int64(duration*1000), samplesScraped, err)
return !mustSwitchToStreamParse, err
tsmGlobal.Update(sw, up == 1, realTimestamp, int64(scrapeDurationSeconds*1000), samplesScraped, err)
return err
}
func (sw *scrapeWork) pushData(at *auth.Token, wr *prompbmarshal.WriteRequest) {
startTime := time.Now()
sw.PushData(at, wr)
pushDataDuration.UpdateDuration(startTime)
}
type streamBodyReader struct {
body []byte
bodyLen int
readOffset int
}
func (sbr *streamBodyReader) Init(sr *streamReader) error {
sbr.body = nil
sbr.bodyLen = 0
sbr.readOffset = 0
// Read the whole response body in memory before parsing it in stream mode.
// This minimizes the time needed for reading response body from scrape target.
startTime := fasttime.UnixTimestamp()
body, err := io.ReadAll(sr)
if err != nil {
d := fasttime.UnixTimestamp() - startTime
return fmt.Errorf("cannot read stream body in %d seconds: %w", d, err)
}
sbr.body = body
sbr.bodyLen = len(body)
return nil
}
func (sbr *streamBodyReader) Read(b []byte) (int, error) {
if sbr.readOffset >= len(sbr.body) {
return 0, io.EOF
}
n := copy(b, sbr.body[sbr.readOffset:])
sbr.readOffset += n
return n, nil
}
func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
func (sw *scrapeWork) processDataInStreamMode(scrapeTimestamp, realTimestamp int64, body *bytesutil.ByteBuffer, scrapeDurationSeconds float64) error {
samplesScraped := 0
samplesPostRelabeling := 0
wc := writeRequestCtxPool.Get(sw.prevLabelsLen)
// Do not pool sbr and do not pre-allocate sbr.body in order to reduce memory usage when scraping big responses.
var sbr streamBodyReader
lastScrape := sw.loadLastScrape()
bodyString := ""
areIdenticalSeries := true
bodyString := bytesutil.ToUnsafeString(body.B)
areIdenticalSeries := sw.areIdenticalSeries(lastScrape, bodyString)
samplesDropped := 0
sr, err := sw.GetStreamReader()
if err != nil {
err = fmt.Errorf("cannot read data: %w", err)
} else {
r := body.NewReader()
var mu sync.Mutex
err = sbr.Init(sr)
if err == nil {
bodyString = bytesutil.ToUnsafeString(sbr.body)
areIdenticalSeries = sw.areIdenticalSeries(lastScrape, bodyString)
err = stream.Parse(&sbr, scrapeTimestamp, false, false, func(rows []parser.Row) error {
err := stream.Parse(r, scrapeTimestamp, false, false, func(rows []parser.Row) error {
mu.Lock()
defer mu.Unlock()
samplesScraped += len(rows)
for i := range rows {
sw.addRowToTimeseries(wc, &rows[i], scrapeTimestamp, true)
@@ -603,6 +553,7 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
if sw.seriesLimitExceeded || !areIdenticalSeries {
samplesDropped += sw.applySeriesLimit(wc)
}
// Push the collected rows to sw before returning from the callback, since they cannot be held
// after returning from the callback - this would result in a data race.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247
@@ -610,15 +561,8 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
wc.resetNoRows()
return nil
}, sw.logError)
}
sr.MustClose()
}
scrapedSamples.Update(float64(samplesScraped))
endTimestamp := time.Now().UnixNano() / 1e6
duration := float64(endTimestamp-realTimestamp) / 1e3
scrapeDuration.Update(duration)
scrapeResponseSize.Update(float64(sbr.bodyLen))
up := 1
if err != nil {
// Mark the scrape as failed even if it already read and pushed some samples
@@ -635,7 +579,7 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
}
am := &autoMetrics{
up: up,
scrapeDurationSeconds: duration,
scrapeDurationSeconds: scrapeDurationSeconds,
samplesScraped: samplesScraped,
samplesPostRelabeling: samplesPostRelabeling,
seriesAdded: seriesAdded,
@@ -644,22 +588,28 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
sw.addAutoMetrics(am, wc, scrapeTimestamp)
sw.pushData(sw.Config.AuthToken, &wc.writeRequest)
sw.prevLabelsLen = len(wc.labels)
sw.prevBodyLen = sbr.bodyLen
sw.prevBodyLen = len(bodyString)
wc.reset()
writeRequestCtxPool.Put(wc)
if !areIdenticalSeries {
// Send stale markers for disappeared metrics with the real scrape timestamp
// in order to guarantee that query doesn't return data after this time for the disappeared metrics.
sw.sendStaleSeries(lastScrape, bodyString, realTimestamp, false)
sw.storeLastScrape(sbr.body)
sw.storeLastScrape(body.B)
}
sw.finalizeLastScrape()
tsmGlobal.Update(sw, up == 1, realTimestamp, int64(duration*1000), samplesScraped, err)
tsmGlobal.Update(sw, up == 1, realTimestamp, int64(scrapeDurationSeconds*1000), samplesScraped, err)
// Do not track active series in streaming mode, since this may require too much memory
// when the target exports a very large number of metrics.
return err
}
func (sw *scrapeWork) pushData(at *auth.Token, wr *prompbmarshal.WriteRequest) {
startTime := time.Now()
sw.PushData(at, wr)
pushDataDuration.UpdateDuration(startTime)
}
func (sw *scrapeWork) areIdenticalSeries(prevData, currData string) bool {
if sw.Config.NoStaleMarkers && sw.Config.SeriesLimit <= 0 {
// Do not spend CPU time on tracking the changes in series if stale markers are disabled.

View file

@@ -7,6 +7,7 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
@@ -89,9 +90,9 @@ func TestScrapeWorkScrapeInternalFailure(t *testing.T) {
}
readDataCalls := 0
sw.ReadData = func(dst []byte) ([]byte, error) {
sw.ReadData = func(dst *bytesutil.ByteBuffer) error {
readDataCalls++
return dst, fmt.Errorf("error when reading data")
return fmt.Errorf("error when reading data")
}
pushDataCalls := 0
@@ -130,10 +131,10 @@ func TestScrapeWorkScrapeInternalSuccess(t *testing.T) {
sw.Config = cfg
readDataCalls := 0
sw.ReadData = func(dst []byte) ([]byte, error) {
sw.ReadData = func(dst *bytesutil.ByteBuffer) error {
readDataCalls++
dst = append(dst, data...)
return dst, nil
dst.B = append(dst.B, data...)
return nil
}
pushDataCalls := 0


@@ -5,6 +5,7 @@ import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
)
@@ -73,8 +74,9 @@ vm_tcplistener_read_timeouts_total{name="https", addr=":443"} 12353
vm_tcplistener_write_calls_total{name="http", addr=":80"} 3996
vm_tcplistener_write_calls_total{name="https", addr=":443"} 132356
`
readDataFunc := func(dst []byte) ([]byte, error) {
return append(dst, data...), nil
readDataFunc := func(dst *bytesutil.ByteBuffer) error {
dst.B = append(dst.B, data...)
return nil
}
b.ReportAllocs()
b.SetBytes(int64(len(data)))


@@ -11,9 +11,6 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
"github.com/VictoriaMetrics/fasthttp"
"github.com/VictoriaMetrics/metrics"
)
@@ -52,30 +49,6 @@ var (
stdDialerOnce sync.Once
)
func newStatDialFunc(proxyURL *proxy.URL, ac *promauth.Config) (fasthttp.DialFunc, error) {
dialFunc, err := proxyURL.NewDialFunc(ac)
if err != nil {
return nil, err
}
statDialFunc := func(addr string) (net.Conn, error) {
conn, err := dialFunc(addr)
dialsTotal.Inc()
if err != nil {
dialErrors.Inc()
if !netutil.TCP6Enabled() && !isTCPv4Addr(addr) {
err = fmt.Errorf("%w; try -enableTCP6 command-line flag if you scrape ipv6 addresses", err)
}
return nil, err
}
conns.Inc()
sc := &statConn{
Conn: conn,
}
return sc, nil
}
return statDialFunc, nil
}
var (
dialsTotal = metrics.NewCounter(`vm_promscrape_dials_total`)
dialErrors = metrics.NewCounter(`vm_promscrape_dial_errors_total`)


@@ -1,21 +1,13 @@
package proxy
import (
"bufio"
"crypto/tls"
"encoding/base64"
"fmt"
"net"
"net/http"
"net/url"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/fasthttp"
"golang.org/x/net/proxy"
)
var validURLSchemes = []string{"http", "https", "socks5", "tls+socks5"}
@@ -84,18 +76,6 @@ func (u *URL) SetHeaders(ac *promauth.Config, req *http.Request) error {
return ac.SetHeaders(req, false)
}
// SetFasthttpHeaders sets headers to req according to u and ac configs.
func (u *URL) SetFasthttpHeaders(ac *promauth.Config, req *fasthttp.Request) error {
ah, err := u.getAuthHeader(ac)
if err != nil {
return fmt.Errorf("cannot obtain Proxy-Authorization headers: %w", err)
}
if ah != "" {
req.Header.Set("Proxy-Authorization", ah)
}
return ac.SetFasthttpHeaders(req, false)
}
// getAuthHeader returns Proxy-Authorization auth header for the given u and ac.
func (u *URL) getAuthHeader(ac *promauth.Config) (string, error) {
authHeader := ""
@@ -141,136 +121,3 @@ func (u *URL) UnmarshalYAML(unmarshal func(interface{}) error) error {
u.URL = parsedURL
return nil
}
// NewDialFunc returns dial func for the given u and ac.
func (u *URL) NewDialFunc(ac *promauth.Config) (fasthttp.DialFunc, error) {
if u == nil || u.URL == nil {
return defaultDialFunc, nil
}
pu := u.URL
if !isURLSchemeValid(pu.Scheme) {
return nil, fmt.Errorf("unknown scheme=%q for proxy_url=%q, must be in %s", pu.Scheme, pu.Redacted(), validURLSchemes)
}
isTLS := (pu.Scheme == "https" || pu.Scheme == "tls+socks5")
proxyAddr := addMissingPort(pu.Host, isTLS)
var tlsCfg *tls.Config
if isTLS {
var err error
tlsCfg, err = ac.NewTLSConfig()
if err != nil {
return nil, fmt.Errorf("cannot initialize tls config: %w", err)
}
if !tlsCfg.InsecureSkipVerify && tlsCfg.ServerName == "" {
tlsCfg.ServerName = tlsServerName(proxyAddr)
}
}
if pu.Scheme == "socks5" || pu.Scheme == "tls+socks5" {
return socks5DialFunc(proxyAddr, pu, tlsCfg)
}
dialFunc := func(addr string) (net.Conn, error) {
proxyConn, err := defaultDialFunc(proxyAddr)
if err != nil {
return nil, fmt.Errorf("cannot connect to proxy %q: %w", pu.Redacted(), err)
}
if isTLS {
proxyConn = tls.Client(proxyConn, tlsCfg)
}
authHeader, err := u.getAuthHeader(ac)
if err != nil {
return nil, fmt.Errorf("cannot obtain Proxy-Authorization header: %w", err)
}
if authHeader != "" {
authHeader = "Proxy-Authorization: " + authHeader + "\r\n"
authHeader += ac.HeadersNoAuthString()
}
conn, err := sendConnectRequest(proxyConn, proxyAddr, addr, authHeader)
if err != nil {
_ = proxyConn.Close()
return nil, fmt.Errorf("error when sending CONNECT request to proxy %q: %w", pu.Redacted(), err)
}
return conn, nil
}
return dialFunc, nil
}
func socks5DialFunc(proxyAddr string, pu *url.URL, tlsCfg *tls.Config) (fasthttp.DialFunc, error) {
var sac *proxy.Auth
if pu.User != nil {
username := pu.User.Username()
password, _ := pu.User.Password()
sac = &proxy.Auth{
User: username,
Password: password,
}
}
network := netutil.GetTCPNetwork()
var dialer proxy.Dialer = proxy.Direct
if tlsCfg != nil {
dialer = &tls.Dialer{
Config: tlsCfg,
}
}
d, err := proxy.SOCKS5(network, proxyAddr, sac, dialer)
if err != nil {
return nil, fmt.Errorf("cannot create socks5 proxy for url: %s, err: %w", pu.Redacted(), err)
}
dialFunc := func(addr string) (net.Conn, error) {
return d.Dial(network, addr)
}
return dialFunc, nil
}
func addMissingPort(addr string, isTLS bool) string {
if strings.IndexByte(addr, ':') >= 0 {
return addr
}
port := "80"
if isTLS {
port = "443"
}
return addr + ":" + port
}
func tlsServerName(addr string) string {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return addr
}
return host
}
func defaultDialFunc(addr string) (net.Conn, error) {
network := netutil.GetTCPNetwork()
// Do not use fasthttp.Dial because of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/987
return net.DialTimeout(network, addr, 5*time.Second)
}
// sendConnectRequest sends CONNECT request to proxyConn for the given addr and authHeader and returns the established connection to dstAddr.
func sendConnectRequest(proxyConn net.Conn, proxyAddr, dstAddr, authHeader string) (net.Conn, error) {
req := "CONNECT " + dstAddr + " HTTP/1.1\r\nHost: " + proxyAddr + "\r\n" + authHeader + "\r\n"
if _, err := proxyConn.Write([]byte(req)); err != nil {
return nil, fmt.Errorf("cannot send CONNECT request for dstAddr=%q: %w", dstAddr, err)
}
var res fasthttp.Response
res.SkipBody = true
conn := &bufferedReaderConn{
br: bufio.NewReader(proxyConn),
Conn: proxyConn,
}
if err := res.Read(conn.br); err != nil {
return nil, fmt.Errorf("cannot read CONNECT response for dstAddr=%q: %w", dstAddr, err)
}
if statusCode := res.Header.StatusCode(); statusCode != 200 {
return nil, fmt.Errorf("unexpected status code received: %d; want: 200; response body: %q", statusCode, res.Body())
}
return conn, nil
}
type bufferedReaderConn struct {
net.Conn
br *bufio.Reader
}
func (brc *bufferedReaderConn) Read(p []byte) (int, error) {
return brc.br.Read(p)
}


@@ -1,3 +0,0 @@
tags
*.pprof
*.fasthttp.gz


@@ -1,16 +0,0 @@
language: go
go:
- 1.9.x
- 1.8.x
script:
# build test for supported platforms
- GOOS=linux go build
- GOOS=darwin go build
- GOOS=freebsd go build
- GOOS=windows go build
- GOARCH=386 go build
# run tests on a standard platform
- go test -v ./...


@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015-2016 Aliaksandr Valialkin, VertaMedia
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,5 +0,0 @@
Private copy of [fasthttp](https://github.com/valyala/fasthttp) for VictoriaMetrics usage.
It contains only the functionality required for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics).
Do not use it in your own projects!


@@ -1,4 +0,0 @@
- SessionClient with referer and cookies support.
- ProxyHandler similar to FSHandler.
- WebSockets. See https://tools.ietf.org/html/rfc6455 .
- HTTP/2.0. See https://tools.ietf.org/html/rfc7540 .


@@ -1,517 +0,0 @@
package fasthttp
import (
"bytes"
"errors"
"io"
"sync"
)
// AcquireArgs returns an empty Args object from the pool.
//
// The returned Args may be returned to the pool with ReleaseArgs
// when no longer needed. This allows reducing GC load.
func AcquireArgs() *Args {
return argsPool.Get().(*Args)
}
// ReleaseArgs returns the object acquired via AcquireArgs to the pool.
//
// Do not access the released Args object, otherwise data races may occur.
func ReleaseArgs(a *Args) {
a.Reset()
argsPool.Put(a)
}
var argsPool = &sync.Pool{
New: func() interface{} {
return &Args{}
},
}
// Args represents query arguments.
//
// It is forbidden copying Args instances. Create new instances instead
// and use CopyTo().
//
// Args instance MUST NOT be used from concurrently running goroutines.
type Args struct {
noCopy noCopy
args []argsKV
buf []byte
}
type argsKV struct {
key []byte
value []byte
}
// Reset clears query args.
func (a *Args) Reset() {
a.args = a.args[:0]
}
// CopyTo copies all args to dst.
func (a *Args) CopyTo(dst *Args) {
dst.Reset()
dst.args = copyArgs(dst.args, a.args)
}
// VisitAll calls f for each existing arg.
//
// f must not retain references to key and value after returning.
// Make key and/or value copies if you need storing them after returning.
func (a *Args) VisitAll(f func(key, value []byte)) {
visitArgs(a.args, f)
}
// Len returns the number of query args.
func (a *Args) Len() int {
return len(a.args)
}
// Parse parses the given string containing query args.
func (a *Args) Parse(s string) {
a.buf = append(a.buf[:0], s...)
a.ParseBytes(a.buf)
}
// ParseBytes parses the given b containing query args.
func (a *Args) ParseBytes(b []byte) {
a.Reset()
var s argsScanner
s.b = b
var kv *argsKV
a.args, kv = allocArg(a.args)
for s.next(kv) {
if len(kv.key) > 0 || len(kv.value) > 0 {
a.args, kv = allocArg(a.args)
}
}
a.args = releaseArg(a.args)
}
// String returns string representation of query args.
func (a *Args) String() string {
return string(a.QueryString())
}
// QueryString returns query string for the args.
//
// The returned value is valid until the next call to Args methods.
func (a *Args) QueryString() []byte {
a.buf = a.AppendBytes(a.buf[:0])
return a.buf
}
// AppendBytes appends query string to dst and returns the extended dst.
func (a *Args) AppendBytes(dst []byte) []byte {
for i, n := 0, len(a.args); i < n; i++ {
kv := &a.args[i]
dst = AppendQuotedArg(dst, kv.key)
if len(kv.value) > 0 {
dst = append(dst, '=')
dst = AppendQuotedArg(dst, kv.value)
}
if i+1 < n {
dst = append(dst, '&')
}
}
return dst
}
// WriteTo writes query string to w.
//
// WriteTo implements io.WriterTo interface.
func (a *Args) WriteTo(w io.Writer) (int64, error) {
n, err := w.Write(a.QueryString())
return int64(n), err
}
// Del deletes argument with the given key from query args.
func (a *Args) Del(key string) {
a.args = delAllArgs(a.args, key)
}
// DelBytes deletes argument with the given key from query args.
func (a *Args) DelBytes(key []byte) {
a.args = delAllArgs(a.args, b2s(key))
}
// Add adds 'key=value' argument.
//
// Multiple values for the same key may be added.
func (a *Args) Add(key, value string) {
a.args = appendArg(a.args, key, value)
}
// AddBytesK adds 'key=value' argument.
//
// Multiple values for the same key may be added.
func (a *Args) AddBytesK(key []byte, value string) {
a.args = appendArg(a.args, b2s(key), value)
}
// AddBytesV adds 'key=value' argument.
//
// Multiple values for the same key may be added.
func (a *Args) AddBytesV(key string, value []byte) {
a.args = appendArg(a.args, key, b2s(value))
}
// AddBytesKV adds 'key=value' argument.
//
// Multiple values for the same key may be added.
func (a *Args) AddBytesKV(key, value []byte) {
a.args = appendArg(a.args, b2s(key), b2s(value))
}
// Set sets 'key=value' argument.
func (a *Args) Set(key, value string) {
a.args = setArg(a.args, key, value)
}
// SetBytesK sets 'key=value' argument.
func (a *Args) SetBytesK(key []byte, value string) {
a.args = setArg(a.args, b2s(key), value)
}
// SetBytesV sets 'key=value' argument.
func (a *Args) SetBytesV(key string, value []byte) {
a.args = setArg(a.args, key, b2s(value))
}
// SetBytesKV sets 'key=value' argument.
func (a *Args) SetBytesKV(key, value []byte) {
a.args = setArgBytes(a.args, key, value)
}
// Peek returns query arg value for the given key.
//
// Returned value is valid until the next Args call.
func (a *Args) Peek(key string) []byte {
return peekArgStr(a.args, key)
}
// PeekBytes returns query arg value for the given key.
//
// Returned value is valid until the next Args call.
func (a *Args) PeekBytes(key []byte) []byte {
return peekArgBytes(a.args, key)
}
// PeekMulti returns all the arg values for the given key.
func (a *Args) PeekMulti(key string) [][]byte {
var values [][]byte
a.VisitAll(func(k, v []byte) {
if string(k) == key {
values = append(values, v)
}
})
return values
}
// PeekMultiBytes returns all the arg values for the given key.
func (a *Args) PeekMultiBytes(key []byte) [][]byte {
return a.PeekMulti(b2s(key))
}
// Has returns true if the given key exists in Args.
func (a *Args) Has(key string) bool {
return hasArg(a.args, key)
}
// HasBytes returns true if the given key exists in Args.
func (a *Args) HasBytes(key []byte) bool {
return hasArg(a.args, b2s(key))
}
// ErrNoArgValue is returned when Args value with the given key is missing.
var ErrNoArgValue = errors.New("no Args value for the given key")
// GetUint returns uint value for the given key.
func (a *Args) GetUint(key string) (int, error) {
value := a.Peek(key)
if len(value) == 0 {
return -1, ErrNoArgValue
}
return ParseUint(value)
}
// SetUint sets uint value for the given key.
func (a *Args) SetUint(key string, value int) {
bb := AcquireByteBuffer()
bb.B = AppendUint(bb.B[:0], value)
a.SetBytesV(key, bb.B)
ReleaseByteBuffer(bb)
}
// SetUintBytes sets uint value for the given key.
func (a *Args) SetUintBytes(key []byte, value int) {
a.SetUint(b2s(key), value)
}
// GetUintOrZero returns uint value for the given key.
//
// Zero (0) is returned on error.
func (a *Args) GetUintOrZero(key string) int {
n, err := a.GetUint(key)
if err != nil {
n = 0
}
return n
}
// GetUfloat returns ufloat value for the given key.
func (a *Args) GetUfloat(key string) (float64, error) {
value := a.Peek(key)
if len(value) == 0 {
return -1, ErrNoArgValue
}
return ParseUfloat(value)
}
// GetUfloatOrZero returns ufloat value for the given key.
//
// Zero (0) is returned on error.
func (a *Args) GetUfloatOrZero(key string) float64 {
f, err := a.GetUfloat(key)
if err != nil {
f = 0
}
return f
}
// GetBool returns boolean value for the given key.
//
// true is returned for '1', 'y' and 'yes' values,
// otherwise false is returned.
func (a *Args) GetBool(key string) bool {
switch string(a.Peek(key)) {
case "1", "y", "yes":
return true
default:
return false
}
}
func visitArgs(args []argsKV, f func(k, v []byte)) {
for i, n := 0, len(args); i < n; i++ {
kv := &args[i]
f(kv.key, kv.value)
}
}
func copyArgs(dst, src []argsKV) []argsKV {
if cap(dst) < len(src) {
tmp := make([]argsKV, len(src))
copy(tmp, dst)
dst = tmp
}
n := len(src)
dst = dst[:n]
for i := 0; i < n; i++ {
dstKV := &dst[i]
srcKV := &src[i]
dstKV.key = append(dstKV.key[:0], srcKV.key...)
dstKV.value = append(dstKV.value[:0], srcKV.value...)
}
return dst
}
func delAllArgsBytes(args []argsKV, key []byte) []argsKV {
return delAllArgs(args, b2s(key))
}
func delAllArgs(args []argsKV, key string) []argsKV {
for i, n := 0, len(args); i < n; i++ {
kv := &args[i]
if key == string(kv.key) {
tmp := *kv
copy(args[i:], args[i+1:])
n--
args[n] = tmp
args = args[:n]
}
}
return args
}
func setArgBytes(h []argsKV, key, value []byte) []argsKV {
return setArg(h, b2s(key), b2s(value))
}
func setArg(h []argsKV, key, value string) []argsKV {
n := len(h)
for i := 0; i < n; i++ {
kv := &h[i]
if key == string(kv.key) {
kv.value = append(kv.value[:0], value...)
return h
}
}
return appendArg(h, key, value)
}
func appendArgBytes(h []argsKV, key, value []byte) []argsKV {
return appendArg(h, b2s(key), b2s(value))
}
func appendArg(args []argsKV, key, value string) []argsKV {
var kv *argsKV
args, kv = allocArg(args)
kv.key = append(kv.key[:0], key...)
kv.value = append(kv.value[:0], value...)
return args
}
func allocArg(h []argsKV) ([]argsKV, *argsKV) {
n := len(h)
if cap(h) > n {
h = h[:n+1]
} else {
h = append(h, argsKV{})
}
return h, &h[n]
}
func releaseArg(h []argsKV) []argsKV {
return h[:len(h)-1]
}
func hasArg(h []argsKV, key string) bool {
for i, n := 0, len(h); i < n; i++ {
kv := &h[i]
if key == string(kv.key) {
return true
}
}
return false
}
func peekArgBytes(h []argsKV, k []byte) []byte {
for i, n := 0, len(h); i < n; i++ {
kv := &h[i]
if bytes.Equal(kv.key, k) {
return kv.value
}
}
return nil
}
func peekArgStr(h []argsKV, k string) []byte {
for i, n := 0, len(h); i < n; i++ {
kv := &h[i]
if string(kv.key) == k {
return kv.value
}
}
return nil
}
type argsScanner struct {
b []byte
}
func (s *argsScanner) next(kv *argsKV) bool {
if len(s.b) == 0 {
return false
}
isKey := true
k := 0
for i, c := range s.b {
switch c {
case '=':
if isKey {
isKey = false
kv.key = decodeArgAppend(kv.key[:0], s.b[:i])
k = i + 1
}
case '&':
if isKey {
kv.key = decodeArgAppend(kv.key[:0], s.b[:i])
kv.value = kv.value[:0]
} else {
kv.value = decodeArgAppend(kv.value[:0], s.b[k:i])
}
s.b = s.b[i+1:]
return true
}
}
if isKey {
kv.key = decodeArgAppend(kv.key[:0], s.b)
kv.value = kv.value[:0]
} else {
kv.value = decodeArgAppend(kv.value[:0], s.b[k:])
}
s.b = s.b[len(s.b):]
return true
}
func decodeArgAppend(dst, src []byte) []byte {
if bytes.IndexByte(src, '%') < 0 && bytes.IndexByte(src, '+') < 0 {
// fast path: src doesn't contain encoded chars
return append(dst, src...)
}
// slow path
for i := 0; i < len(src); i++ {
c := src[i]
if c == '%' {
if i+2 >= len(src) {
return append(dst, src[i:]...)
}
x2 := hex2intTable[src[i+2]]
x1 := hex2intTable[src[i+1]]
if x1 == 16 || x2 == 16 {
dst = append(dst, '%')
} else {
dst = append(dst, x1<<4|x2)
i += 2
}
} else if c == '+' {
dst = append(dst, ' ')
} else {
dst = append(dst, c)
}
}
return dst
}
// decodeArgAppendNoPlus is almost identical to decodeArgAppend, but it doesn't
// substitute '+' with ' '.
//
// The function is copy-pasted from decodeArgAppend due to performance
// reasons only.
func decodeArgAppendNoPlus(dst, src []byte) []byte {
if bytes.IndexByte(src, '%') < 0 {
// fast path: src doesn't contain encoded chars
return append(dst, src...)
}
// slow path
for i := 0; i < len(src); i++ {
c := src[i]
if c == '%' {
if i+2 >= len(src) {
return append(dst, src[i:]...)
}
x2 := hex2intTable[src[i+2]]
x1 := hex2intTable[src[i+1]]
if x1 == 16 || x2 == 16 {
dst = append(dst, '%')
} else {
dst = append(dst, x1<<4|x2)
i += 2
}
} else {
dst = append(dst, c)
}
}
return dst
}


@@ -1,64 +0,0 @@
package fasthttp
import (
"github.com/valyala/bytebufferpool"
)
// ByteBuffer provides byte buffer, which can be used with fasthttp API
// in order to minimize memory allocations.
//
// ByteBuffer may be used with functions appending data to the given []byte
// slice. See example code for details.
//
// Use AcquireByteBuffer for obtaining an empty byte buffer.
//
// ByteBuffer is deprecated. Use github.com/valyala/bytebufferpool instead.
type ByteBuffer bytebufferpool.ByteBuffer
// Write implements io.Writer - it appends p to ByteBuffer.B
func (b *ByteBuffer) Write(p []byte) (int, error) {
return bb(b).Write(p)
}
// WriteString appends s to ByteBuffer.B
func (b *ByteBuffer) WriteString(s string) (int, error) {
return bb(b).WriteString(s)
}
// Set sets ByteBuffer.B to p
func (b *ByteBuffer) Set(p []byte) {
bb(b).Set(p)
}
// SetString sets ByteBuffer.B to s
func (b *ByteBuffer) SetString(s string) {
bb(b).SetString(s)
}
// Reset makes ByteBuffer.B empty.
func (b *ByteBuffer) Reset() {
bb(b).Reset()
}
// AcquireByteBuffer returns an empty byte buffer from the pool.
//
// Acquired byte buffer may be returned to the pool via ReleaseByteBuffer call.
// This reduces the number of memory allocations required for byte buffer
// management.
func AcquireByteBuffer() *ByteBuffer {
return (*ByteBuffer)(defaultByteBufferPool.Get())
}
// ReleaseByteBuffer returns byte buffer to the pool.
//
// ByteBuffer.B mustn't be touched after returning it to the pool.
// Otherwise data races occur.
func ReleaseByteBuffer(b *ByteBuffer) {
defaultByteBufferPool.Put(bb(b))
}
func bb(b *ByteBuffer) *bytebufferpool.ByteBuffer {
return (*bytebufferpool.ByteBuffer)(b)
}
var defaultByteBufferPool bytebufferpool.Pool
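// Illustrative sketch (not part of the removed file): pooling a buffer for an
// append-style API. The buffer contents must not be retained after release.
func exampleByteBuffer() {
	b := AcquireByteBuffer()
	b.B = AppendUint(b.B, 12345) // append-style helpers reuse pooled storage
	// ... use b.B here ...
	ReleaseByteBuffer(b) // b.B must not be touched past this point
}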

@@ -1,446 +0,0 @@
package fasthttp
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"math"
"net"
"reflect"
"strings"
"sync"
"time"
"unsafe"
)
// AppendHTMLEscape appends html-escaped s to dst and returns the extended dst.
func AppendHTMLEscape(dst []byte, s string) []byte {
if strings.IndexByte(s, '<') < 0 &&
strings.IndexByte(s, '>') < 0 &&
strings.IndexByte(s, '"') < 0 &&
strings.IndexByte(s, '\'') < 0 {
// fast path - nothing to escape
return append(dst, s...)
}
// slow path
var prev int
var sub string
for i, n := 0, len(s); i < n; i++ {
sub = ""
switch s[i] {
case '<':
sub = "&lt;"
case '>':
sub = "&gt;"
case '"':
sub = "&quot;"
case '\'':
sub = "&#39;"
}
if len(sub) > 0 {
dst = append(dst, s[prev:i]...)
dst = append(dst, sub...)
prev = i + 1
}
}
return append(dst, s[prev:]...)
}
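// Illustrative sketch (not part of the removed file): escaping untrusted text
// before embedding it in HTML. The dst-append style avoids allocations when
// the caller reuses dst across calls.
func exampleAppendHTMLEscape() {
	dst := AppendHTMLEscape(nil, `<a href="x">O'Neill</a>`)
	// dst now holds "&lt;a href=&quot;x&quot;&gt;O&#39;Neill&lt;/a&gt;"
	_ = dst
}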
// AppendHTMLEscapeBytes appends html-escaped s to dst and returns
// the extended dst.
func AppendHTMLEscapeBytes(dst, s []byte) []byte {
return AppendHTMLEscape(dst, b2s(s))
}
// AppendIPv4 appends string representation of the given ip v4 to dst
// and returns the extended dst.
func AppendIPv4(dst []byte, ip net.IP) []byte {
ip = ip.To4()
if ip == nil {
return append(dst, "non-v4 ip passed to AppendIPv4"...)
}
dst = AppendUint(dst, int(ip[0]))
for i := 1; i < 4; i++ {
dst = append(dst, '.')
dst = AppendUint(dst, int(ip[i]))
}
return dst
}
var errEmptyIPStr = errors.New("empty ip address string")
// ParseIPv4 parses ip address from ipStr into dst and returns the extended dst.
func ParseIPv4(dst net.IP, ipStr []byte) (net.IP, error) {
if len(ipStr) == 0 {
return dst, errEmptyIPStr
}
if len(dst) < net.IPv4len {
dst = make([]byte, net.IPv4len)
}
copy(dst, net.IPv4zero)
dst = dst.To4()
if dst == nil {
panic("BUG: dst must not be nil")
}
b := ipStr
for i := 0; i < 3; i++ {
n := bytes.IndexByte(b, '.')
if n < 0 {
return dst, fmt.Errorf("cannot find dot in ipStr %q", ipStr)
}
v, err := ParseUint(b[:n])
if err != nil {
return dst, fmt.Errorf("cannot parse ipStr %q: %s", ipStr, err)
}
if v > 255 {
return dst, fmt.Errorf("cannot parse ipStr %q: ip part cannot exceed 255: parsed %d", ipStr, v)
}
dst[i] = byte(v)
b = b[n+1:]
}
v, err := ParseUint(b)
if err != nil {
return dst, fmt.Errorf("cannot parse ipStr %q: %s", ipStr, err)
}
if v > 255 {
return dst, fmt.Errorf("cannot parse ipStr %q: ip part cannot exceed 255: parsed %d", ipStr, v)
}
dst[3] = byte(v)
return dst, nil
}
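// Illustrative sketch (not part of the removed file): round-tripping an IPv4
// address through ParseIPv4 and AppendIPv4 with reusable buffers.
func exampleIPv4RoundTrip() {
	ip, err := ParseIPv4(nil, []byte("192.168.0.1"))
	if err != nil {
		panic(err)
	}
	buf := AppendIPv4(nil, ip) // buf now holds "192.168.0.1"
	_ = buf
}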
// AppendHTTPDate appends HTTP-compliant (RFC1123) representation of date
// to dst and returns the extended dst.
func AppendHTTPDate(dst []byte, date time.Time) []byte {
dst = date.In(time.UTC).AppendFormat(dst, time.RFC1123)
copy(dst[len(dst)-3:], strGMT)
return dst
}
// ParseHTTPDate parses HTTP-compliant (RFC1123) date.
func ParseHTTPDate(date []byte) (time.Time, error) {
return time.Parse(time.RFC1123, b2s(date))
}
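// Illustrative sketch (not part of the removed file): formatting and parsing
// an RFC1123 date as used in HTTP headers. AppendHTTPDate always renders the
// time in GMT, which is what ParseHTTPDate accepts back.
func exampleHTTPDate() {
	buf := AppendHTTPDate(nil, time.Unix(0, 0))
	// buf now holds "Thu, 01 Jan 1970 00:00:00 GMT"
	t, err := ParseHTTPDate(buf)
	_, _ = t, err
}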
// AppendUint appends n to dst and returns the extended dst.
func AppendUint(dst []byte, n int) []byte {
if n < 0 {
panic("BUG: int must be positive")
}
var b [20]byte
buf := b[:]
i := len(buf)
var q int
for n >= 10 {
i--
q = n / 10
buf[i] = '0' + byte(n-q*10)
n = q
}
i--
buf[i] = '0' + byte(n)
dst = append(dst, buf[i:]...)
return dst
}
// ParseUint parses uint from buf.
func ParseUint(buf []byte) (int, error) {
v, n, err := parseUintBuf(buf)
if n != len(buf) {
return -1, errUnexpectedTrailingChar
}
return v, err
}
var (
errEmptyInt = errors.New("empty integer")
errUnexpectedFirstChar = errors.New("unexpected first char found. Expecting 0-9")
errUnexpectedTrailingChar = errors.New("unexpected trailing char found. Expecting 0-9")
errTooLongInt = errors.New("too long int")
)
func parseUintBuf(b []byte) (int, int, error) {
n := len(b)
if n == 0 {
return -1, 0, errEmptyInt
}
v := 0
for i := 0; i < n; i++ {
c := b[i]
k := c - '0'
if k > 9 {
if i == 0 {
return -1, i, errUnexpectedFirstChar
}
return v, i, nil
}
if i >= maxIntChars {
return -1, i, errTooLongInt
}
v = 10*v + int(k)
}
return v, n, nil
}
var (
errEmptyFloat = errors.New("empty float number")
errDuplicateFloatPoint = errors.New("duplicate point found in float number")
errUnexpectedFloatEnd = errors.New("unexpected end of float number")
errInvalidFloatExponent = errors.New("invalid float number exponent")
errUnexpectedFloatChar = errors.New("unexpected char found in float number")
)
// ParseUfloat parses unsigned float from buf.
func ParseUfloat(buf []byte) (float64, error) {
if len(buf) == 0 {
return -1, errEmptyFloat
}
b := buf
var v uint64
var offset = 1.0
var pointFound bool
for i, c := range b {
if c < '0' || c > '9' {
if c == '.' {
if pointFound {
return -1, errDuplicateFloatPoint
}
pointFound = true
continue
}
if c == 'e' || c == 'E' {
if i+1 >= len(b) {
return -1, errUnexpectedFloatEnd
}
b = b[i+1:]
minus := -1
switch b[0] {
case '+':
b = b[1:]
minus = 1
case '-':
b = b[1:]
default:
minus = 1
}
vv, err := ParseUint(b)
if err != nil {
return -1, errInvalidFloatExponent
}
return float64(v) * offset * math.Pow10(minus*int(vv)), nil
}
return -1, errUnexpectedFloatChar
}
v = 10*v + uint64(c-'0')
if pointFound {
offset /= 10
}
}
return float64(v) * offset, nil
}
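// Illustrative sketch (not part of the removed file): parsing numeric header
// or chunk values without allocations. Both helpers reject trailing garbage.
func exampleParseNumbers() {
	n, err := ParseUint([]byte("1234"))     // n == 1234
	f, ferr := ParseUfloat([]byte("1.5e2")) // f == 150.0
	_, _, _, _ = n, err, f, ferr
}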
var (
errEmptyHexNum = errors.New("empty hex number")
errTooLargeHexNum = errors.New("too large hex number")
)
func readHexInt(r *bufio.Reader) (int, error) {
n := 0
i := 0
var k int
for {
c, err := r.ReadByte()
if err != nil {
if err == io.EOF && i > 0 {
return n, nil
}
return -1, err
}
k = int(hex2intTable[c])
if k == 16 {
if i == 0 {
return -1, errEmptyHexNum
}
r.UnreadByte()
return n, nil
}
if i >= maxHexIntChars {
return -1, errTooLargeHexNum
}
n = (n << 4) | k
i++
}
}
var hexIntBufPool sync.Pool
func writeHexInt(w *bufio.Writer, n int) error {
if n < 0 {
panic("BUG: int must be positive")
}
v := hexIntBufPool.Get()
if v == nil {
v = make([]byte, maxHexIntChars+1)
}
buf := v.([]byte)
i := len(buf) - 1
for {
buf[i] = int2hexbyte(n & 0xf)
n >>= 4
if n == 0 {
break
}
i--
}
_, err := w.Write(buf[i:])
hexIntBufPool.Put(v)
return err
}
func int2hexbyte(n int) byte {
if n < 10 {
return '0' + byte(n)
}
return 'a' + byte(n) - 10
}
func hexCharUpper(c byte) byte {
if c < 10 {
return '0' + c
}
return c - 10 + 'A'
}
var hex2intTable = func() []byte {
b := make([]byte, 256)
for i := 0; i < 256; i++ {
c := byte(16)
if i >= '0' && i <= '9' {
c = byte(i) - '0'
} else if i >= 'a' && i <= 'f' {
c = byte(i) - 'a' + 10
} else if i >= 'A' && i <= 'F' {
c = byte(i) - 'A' + 10
}
b[i] = c
}
return b
}()
const toLower = 'a' - 'A'
var toLowerTable = func() [256]byte {
var a [256]byte
for i := 0; i < 256; i++ {
c := byte(i)
if c >= 'A' && c <= 'Z' {
c += toLower
}
a[i] = c
}
return a
}()
var toUpperTable = func() [256]byte {
var a [256]byte
for i := 0; i < 256; i++ {
c := byte(i)
if c >= 'a' && c <= 'z' {
c -= toLower
}
a[i] = c
}
return a
}()
func lowercaseBytes(b []byte) {
for i := 0; i < len(b); i++ {
p := &b[i]
*p = toLowerTable[*p]
}
}
// b2s converts byte slice to a string without memory allocation.
// See https://groups.google.com/forum/#!msg/Golang-Nuts/ENgbUzYvCuU/90yGx7GUAgAJ .
//
// Note it may break if the string and/or slice header changes
// in future Go versions.
func b2s(b []byte) string {
return *(*string)(unsafe.Pointer(&b))
}
// s2b converts string to a byte slice without memory allocation.
//
// Note it may break if the string and/or slice header changes
// in future Go versions.
func s2b(s string) (b []byte) {
sh := (*reflect.StringHeader)(unsafe.Pointer(&s))
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = sh.Data
bh.Len = sh.Len
bh.Cap = sh.Len
return b
}
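// Illustrative sketch (not part of the removed file): the zero-copy
// conversions above are only safe while the bytes stay immutable, since the
// resulting string aliases the slice's memory instead of copying it.
func exampleZeroCopy() {
	b := []byte("content-type")
	s := b2s(b) // no allocation; s shares b's backing array
	_ = s       // b must not be modified while s is in use
}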
// AppendUnquotedArg appends url-decoded src to dst and returns appended dst.
//
// dst may point to src. In this case src will be overwritten.
func AppendUnquotedArg(dst, src []byte) []byte {
return decodeArgAppend(dst, src)
}
// AppendQuotedArg appends url-encoded src to dst and returns appended dst.
func AppendQuotedArg(dst, src []byte) []byte {
for _, c := range src {
// See http://www.w3.org/TR/html5/forms.html#form-submission-algorithm
if c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' ||
c == '*' || c == '-' || c == '.' || c == '_' {
dst = append(dst, c)
} else {
dst = append(dst, '%', hexCharUpper(c>>4), hexCharUpper(c&15))
}
}
return dst
}
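// Illustrative sketch (not part of the removed file): percent-encoding a
// query argument and decoding it back. AppendUnquotedArg reverses
// AppendQuotedArg.
func exampleQuotedArg() {
	enc := AppendQuotedArg(nil, []byte("a b/c"))
	// enc now holds "a%20b%2Fc"
	dec := AppendUnquotedArg(nil, enc) // dec == "a b/c"
	_ = dec
}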
func appendQuotedPath(dst, src []byte) []byte {
for _, c := range src {
if c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' ||
c == '/' || c == '.' || c == ',' || c == '=' || c == ':' || c == '&' || c == '~' || c == '-' || c == '_' {
dst = append(dst, c)
} else {
dst = append(dst, '%', hexCharUpper(c>>4), hexCharUpper(c&15))
}
}
return dst
}
// EqualBytesStr returns true if string(b) == s.
//
// This function has no performance benefits comparing to string(b) == s.
// It is left here for backwards compatibility only.
//
// This function is deprecated and may be deleted soon.
func EqualBytesStr(b []byte, s string) bool {
return string(b) == s
}
// AppendBytesStr appends src to dst and returns the extended dst.
//
// This function has no performance benefits comparing to append(dst, src...).
// It is left here for backwards compatibility only.
//
// This function is deprecated and may be deleted soon.
func AppendBytesStr(dst []byte, src string) []byte {
return append(dst, src...)
}

@@ -1,9 +0,0 @@
//go:build !amd64 && !arm64 && !ppc64
// +build !amd64,!arm64,!ppc64
package fasthttp
const (
maxIntChars = 9
maxHexIntChars = 7
)

@@ -1,9 +0,0 @@
//go:build amd64 || arm64 || ppc64
// +build amd64 arm64 ppc64
package fasthttp
const (
maxIntChars = 18
maxHexIntChars = 15
)

File diff suppressed because it is too large

@@ -1,440 +0,0 @@
package fasthttp
import (
"bytes"
"fmt"
"io"
"os"
"sync"
"github.com/VictoriaMetrics/fasthttp/stackless"
"github.com/klauspost/compress/flate"
"github.com/klauspost/compress/gzip"
"github.com/klauspost/compress/zlib"
"github.com/valyala/bytebufferpool"
)
// Supported compression levels.
const (
CompressNoCompression = flate.NoCompression
CompressBestSpeed = flate.BestSpeed
CompressBestCompression = flate.BestCompression
CompressDefaultCompression = 6 // flate.DefaultCompression
CompressHuffmanOnly = -2 // flate.HuffmanOnly
)
func acquireGzipReader(r io.Reader) (*gzip.Reader, error) {
v := gzipReaderPool.Get()
if v == nil {
return gzip.NewReader(r)
}
zr := v.(*gzip.Reader)
if err := zr.Reset(r); err != nil {
return nil, err
}
return zr, nil
}
func releaseGzipReader(zr *gzip.Reader) {
zr.Close()
gzipReaderPool.Put(zr)
}
var gzipReaderPool sync.Pool
func acquireFlateReader(r io.Reader) (io.ReadCloser, error) {
v := flateReaderPool.Get()
if v == nil {
zr, err := zlib.NewReader(r)
if err != nil {
return nil, err
}
return zr, nil
}
zr := v.(io.ReadCloser)
if err := resetFlateReader(zr, r); err != nil {
return nil, err
}
return zr, nil
}
func releaseFlateReader(zr io.ReadCloser) {
zr.Close()
flateReaderPool.Put(zr)
}
func resetFlateReader(zr io.ReadCloser, r io.Reader) error {
zrr, ok := zr.(zlib.Resetter)
if !ok {
panic("BUG: zlib.Reader doesn't implement zlib.Resetter???")
}
return zrr.Reset(r, nil)
}
var flateReaderPool sync.Pool
func acquireStacklessGzipWriter(w io.Writer, level int) stackless.Writer {
nLevel := normalizeCompressLevel(level)
p := stacklessGzipWriterPoolMap[nLevel]
v := p.Get()
if v == nil {
return stackless.NewWriter(w, func(w io.Writer) stackless.Writer {
return acquireRealGzipWriter(w, level)
})
}
sw := v.(stackless.Writer)
sw.Reset(w)
return sw
}
func releaseStacklessGzipWriter(sw stackless.Writer, level int) {
sw.Close()
nLevel := normalizeCompressLevel(level)
p := stacklessGzipWriterPoolMap[nLevel]
p.Put(sw)
}
func acquireRealGzipWriter(w io.Writer, level int) *gzip.Writer {
nLevel := normalizeCompressLevel(level)
p := realGzipWriterPoolMap[nLevel]
v := p.Get()
if v == nil {
zw, err := gzip.NewWriterLevel(w, level)
if err != nil {
panic(fmt.Sprintf("BUG: unexpected error from gzip.NewWriterLevel(%d): %s", level, err))
}
return zw
}
zw := v.(*gzip.Writer)
zw.Reset(w)
return zw
}
func releaseRealGzipWriter(zw *gzip.Writer, level int) {
zw.Close()
nLevel := normalizeCompressLevel(level)
p := realGzipWriterPoolMap[nLevel]
p.Put(zw)
}
var (
stacklessGzipWriterPoolMap = newCompressWriterPoolMap()
realGzipWriterPoolMap = newCompressWriterPoolMap()
)
// AppendGzipBytesLevel appends gzipped src to dst using the given
// compression level and returns the resulting dst.
//
// Supported compression levels are:
//
// - CompressNoCompression
// - CompressBestSpeed
// - CompressBestCompression
// - CompressDefaultCompression
// - CompressHuffmanOnly
func AppendGzipBytesLevel(dst, src []byte, level int) []byte {
w := &byteSliceWriter{dst}
WriteGzipLevel(w, src, level)
return w.b
}
// WriteGzipLevel writes gzipped p to w using the given compression level
// and returns the number of compressed bytes written to w.
//
// Supported compression levels are:
//
// - CompressNoCompression
// - CompressBestSpeed
// - CompressBestCompression
// - CompressDefaultCompression
// - CompressHuffmanOnly
func WriteGzipLevel(w io.Writer, p []byte, level int) (int, error) {
switch w.(type) {
case *byteSliceWriter,
*bytes.Buffer,
*ByteBuffer,
*bytebufferpool.ByteBuffer:
// These writers don't block, so we can just use stacklessWriteGzip
ctx := &compressCtx{
w: w,
p: p,
level: level,
}
stacklessWriteGzip(ctx)
return len(p), nil
default:
zw := acquireStacklessGzipWriter(w, level)
n, err := zw.Write(p)
releaseStacklessGzipWriter(zw, level)
return n, err
}
}
var stacklessWriteGzip = stackless.NewFunc(nonblockingWriteGzip)
func nonblockingWriteGzip(ctxv interface{}) {
ctx := ctxv.(*compressCtx)
zw := acquireRealGzipWriter(ctx.w, ctx.level)
_, err := zw.Write(ctx.p)
if err != nil {
panic(fmt.Sprintf("BUG: gzip.Writer.Write for len(p)=%d returned unexpected error: %s", len(ctx.p), err))
}
releaseRealGzipWriter(zw, ctx.level)
}
// WriteGzip writes gzipped p to w and returns the number of compressed
// bytes written to w.
func WriteGzip(w io.Writer, p []byte) (int, error) {
return WriteGzipLevel(w, p, CompressDefaultCompression)
}
// AppendGzipBytes appends gzipped src to dst and returns the resulting dst.
func AppendGzipBytes(dst, src []byte) []byte {
return AppendGzipBytesLevel(dst, src, CompressDefaultCompression)
}
// WriteGunzip writes ungzipped p to w and returns the number of uncompressed
// bytes written to w.
func WriteGunzip(w io.Writer, p []byte) (int, error) {
r := &byteSliceReader{p}
zr, err := acquireGzipReader(r)
if err != nil {
return 0, err
}
n, err := copyZeroAlloc(w, zr)
releaseGzipReader(zr)
nn := int(n)
if int64(nn) != n {
return 0, fmt.Errorf("too much data gunzipped: %d", n)
}
return nn, err
}
// AppendGunzipBytes appends gunzipped src to dst and returns the resulting dst.
func AppendGunzipBytes(dst, src []byte) ([]byte, error) {
w := &byteSliceWriter{dst}
_, err := WriteGunzip(w, src)
return w.b, err
}
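// Illustrative sketch (not part of the removed file): compressing a payload
// and getting it back with the append-style gzip helpers above.
func exampleGzipRoundTrip() {
	compressed := AppendGzipBytes(nil, []byte("hello, hello, hello"))
	plain, err := AppendGunzipBytes(nil, compressed)
	if err != nil {
		panic(err) // cannot happen for data produced by AppendGzipBytes
	}
	_ = plain // "hello, hello, hello"
}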
// AppendDeflateBytesLevel appends deflated src to dst using the given
// compression level and returns the resulting dst.
//
// Supported compression levels are:
//
// - CompressNoCompression
// - CompressBestSpeed
// - CompressBestCompression
// - CompressDefaultCompression
// - CompressHuffmanOnly
func AppendDeflateBytesLevel(dst, src []byte, level int) []byte {
w := &byteSliceWriter{dst}
WriteDeflateLevel(w, src, level)
return w.b
}
// WriteDeflateLevel writes deflated p to w using the given compression level
// and returns the number of compressed bytes written to w.
//
// Supported compression levels are:
//
// - CompressNoCompression
// - CompressBestSpeed
// - CompressBestCompression
// - CompressDefaultCompression
// - CompressHuffmanOnly
func WriteDeflateLevel(w io.Writer, p []byte, level int) (int, error) {
switch w.(type) {
case *byteSliceWriter,
*bytes.Buffer,
*ByteBuffer,
*bytebufferpool.ByteBuffer:
// These writers don't block, so we can just use stacklessWriteDeflate
ctx := &compressCtx{
w: w,
p: p,
level: level,
}
stacklessWriteDeflate(ctx)
return len(p), nil
default:
zw := acquireStacklessDeflateWriter(w, level)
n, err := zw.Write(p)
releaseStacklessDeflateWriter(zw, level)
return n, err
}
}
var stacklessWriteDeflate = stackless.NewFunc(nonblockingWriteDeflate)
func nonblockingWriteDeflate(ctxv interface{}) {
ctx := ctxv.(*compressCtx)
zw := acquireRealDeflateWriter(ctx.w, ctx.level)
_, err := zw.Write(ctx.p)
if err != nil {
panic(fmt.Sprintf("BUG: zlib.Writer.Write for len(p)=%d returned unexpected error: %s", len(ctx.p), err))
}
releaseRealDeflateWriter(zw, ctx.level)
}
type compressCtx struct {
w io.Writer
p []byte
level int
}
// WriteDeflate writes deflated p to w and returns the number of compressed
// bytes written to w.
func WriteDeflate(w io.Writer, p []byte) (int, error) {
return WriteDeflateLevel(w, p, CompressDefaultCompression)
}
// AppendDeflateBytes appends deflated src to dst and returns the resulting dst.
func AppendDeflateBytes(dst, src []byte) []byte {
return AppendDeflateBytesLevel(dst, src, CompressDefaultCompression)
}
// WriteInflate writes inflated p to w and returns the number of uncompressed
// bytes written to w.
func WriteInflate(w io.Writer, p []byte) (int, error) {
r := &byteSliceReader{p}
zr, err := acquireFlateReader(r)
if err != nil {
return 0, err
}
n, err := copyZeroAlloc(w, zr)
releaseFlateReader(zr)
nn := int(n)
if int64(nn) != n {
return 0, fmt.Errorf("too much data inflated: %d", n)
}
return nn, err
}
// AppendInflateBytes appends inflated src to dst and returns the resulting dst.
func AppendInflateBytes(dst, src []byte) ([]byte, error) {
w := &byteSliceWriter{dst}
_, err := WriteInflate(w, src)
return w.b, err
}
type byteSliceWriter struct {
b []byte
}
func (w *byteSliceWriter) Write(p []byte) (int, error) {
w.b = append(w.b, p...)
return len(p), nil
}
type byteSliceReader struct {
b []byte
}
func (r *byteSliceReader) Read(p []byte) (int, error) {
if len(r.b) == 0 {
return 0, io.EOF
}
n := copy(p, r.b)
r.b = r.b[n:]
return n, nil
}
func acquireStacklessDeflateWriter(w io.Writer, level int) stackless.Writer {
nLevel := normalizeCompressLevel(level)
p := stacklessDeflateWriterPoolMap[nLevel]
v := p.Get()
if v == nil {
return stackless.NewWriter(w, func(w io.Writer) stackless.Writer {
return acquireRealDeflateWriter(w, level)
})
}
sw := v.(stackless.Writer)
sw.Reset(w)
return sw
}
func releaseStacklessDeflateWriter(sw stackless.Writer, level int) {
sw.Close()
nLevel := normalizeCompressLevel(level)
p := stacklessDeflateWriterPoolMap[nLevel]
p.Put(sw)
}
func acquireRealDeflateWriter(w io.Writer, level int) *zlib.Writer {
nLevel := normalizeCompressLevel(level)
p := realDeflateWriterPoolMap[nLevel]
v := p.Get()
if v == nil {
zw, err := zlib.NewWriterLevel(w, level)
if err != nil {
panic(fmt.Sprintf("BUG: unexpected error from zlib.NewWriterLevel(%d): %s", level, err))
}
return zw
}
zw := v.(*zlib.Writer)
zw.Reset(w)
return zw
}
func releaseRealDeflateWriter(zw *zlib.Writer, level int) {
zw.Close()
nLevel := normalizeCompressLevel(level)
p := realDeflateWriterPoolMap[nLevel]
p.Put(zw)
}
var (
stacklessDeflateWriterPoolMap = newCompressWriterPoolMap()
realDeflateWriterPoolMap = newCompressWriterPoolMap()
)
func newCompressWriterPoolMap() []*sync.Pool {
// Initialize pools for all the compression levels defined
// in https://golang.org/pkg/compress/flate/#pkg-constants .
// Compression levels are normalized with normalizeCompressLevel,
// so they fit [0..11].
var m []*sync.Pool
for i := 0; i < 12; i++ {
m = append(m, &sync.Pool{})
}
return m
}
func isFileCompressible(f *os.File, minCompressRatio float64) bool {
// Try compressing the first 4KB of the file
// and see if it can be compressed by more than
// the given minCompressRatio.
b := AcquireByteBuffer()
zw := acquireStacklessGzipWriter(b, CompressDefaultCompression)
lr := &io.LimitedReader{
R: f,
N: 4096,
}
_, err := copyZeroAlloc(zw, lr)
releaseStacklessGzipWriter(zw, CompressDefaultCompression)
f.Seek(0, 0)
if err != nil {
return false
}
n := 4096 - lr.N
zn := len(b.B)
ReleaseByteBuffer(b)
return float64(zn) < float64(n)*minCompressRatio
}
// normalizes compression level into [0..11], so it could be used as an index
// in *PoolMap.
func normalizeCompressLevel(level int) int {
// -2 is the lowest compression level - CompressHuffmanOnly
// 9 is the highest compression level - CompressBestCompression
if level < -2 || level > 9 {
level = CompressDefaultCompression
}
return level + 2
}

@@ -1,396 +0,0 @@
package fasthttp
import (
"bytes"
"errors"
"io"
"sync"
"time"
)
var zeroTime time.Time
var (
// CookieExpireDelete may be set on Cookie.Expire for expiring the given cookie.
CookieExpireDelete = time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)
// CookieExpireUnlimited indicates that the cookie doesn't expire.
CookieExpireUnlimited = zeroTime
)
// AcquireCookie returns an empty Cookie object from the pool.
//
// The returned object may be returned back to the pool with ReleaseCookie.
// This allows reducing GC load.
func AcquireCookie() *Cookie {
return cookiePool.Get().(*Cookie)
}
// ReleaseCookie returns the Cookie object acquired with AcquireCookie back
// to the pool.
//
// Do not access released Cookie object, otherwise data races may occur.
func ReleaseCookie(c *Cookie) {
c.Reset()
cookiePool.Put(c)
}
var cookiePool = &sync.Pool{
New: func() interface{} {
return &Cookie{}
},
}
// Cookie represents HTTP response cookie.
//
// Do not copy Cookie objects. Create new object and use CopyTo instead.
//
// Cookie instance MUST NOT be used from concurrently running goroutines.
type Cookie struct {
noCopy noCopy
key []byte
value []byte
expire time.Time
domain []byte
path []byte
httpOnly bool
secure bool
bufKV argsKV
buf []byte
}
// CopyTo copies src cookie to c.
func (c *Cookie) CopyTo(src *Cookie) {
c.Reset()
c.key = append(c.key[:0], src.key...)
c.value = append(c.value[:0], src.value...)
c.expire = src.expire
c.domain = append(c.domain[:0], src.domain...)
c.path = append(c.path[:0], src.path...)
c.httpOnly = src.httpOnly
c.secure = src.secure
}
// HTTPOnly returns true if the cookie is http only.
func (c *Cookie) HTTPOnly() bool {
return c.httpOnly
}
// SetHTTPOnly sets cookie's httpOnly flag to the given value.
func (c *Cookie) SetHTTPOnly(httpOnly bool) {
c.httpOnly = httpOnly
}
// Secure returns true if the cookie is secure.
func (c *Cookie) Secure() bool {
return c.secure
}
// SetSecure sets cookie's secure flag to the given value.
func (c *Cookie) SetSecure(secure bool) {
c.secure = secure
}
// Path returns cookie path.
func (c *Cookie) Path() []byte {
return c.path
}
// SetPath sets cookie path.
func (c *Cookie) SetPath(path string) {
c.buf = append(c.buf[:0], path...)
c.path = normalizePath(c.path, c.buf)
}
// SetPathBytes sets cookie path.
func (c *Cookie) SetPathBytes(path []byte) {
c.buf = append(c.buf[:0], path...)
c.path = normalizePath(c.path, c.buf)
}
// Domain returns cookie domain.
//
// The returned domain is valid until the next Cookie modification method call.
func (c *Cookie) Domain() []byte {
return c.domain
}
// SetDomain sets cookie domain.
func (c *Cookie) SetDomain(domain string) {
c.domain = append(c.domain[:0], domain...)
}
// SetDomainBytes sets cookie domain.
func (c *Cookie) SetDomainBytes(domain []byte) {
c.domain = append(c.domain[:0], domain...)
}
// Expire returns cookie expiration time.
//
// CookieExpireUnlimited is returned if the cookie doesn't expire.
func (c *Cookie) Expire() time.Time {
expire := c.expire
if expire.IsZero() {
expire = CookieExpireUnlimited
}
return expire
}
// SetExpire sets cookie expiration time.
//
// Set expiration time to CookieExpireDelete for expiring (deleting)
// the cookie on the client.
//
// By default cookie lifetime is limited by browser session.
func (c *Cookie) SetExpire(expire time.Time) {
c.expire = expire
}
// Value returns cookie value.
//
// The returned value is valid until the next Cookie modification method call.
func (c *Cookie) Value() []byte {
return c.value
}
// SetValue sets cookie value.
func (c *Cookie) SetValue(value string) {
c.value = append(c.value[:0], value...)
}
// SetValueBytes sets cookie value.
func (c *Cookie) SetValueBytes(value []byte) {
c.value = append(c.value[:0], value...)
}
// Key returns cookie name.
//
// The returned value is valid until the next Cookie modification method call.
func (c *Cookie) Key() []byte {
return c.key
}
// SetKey sets cookie name.
func (c *Cookie) SetKey(key string) {
c.key = append(c.key[:0], key...)
}
// SetKeyBytes sets cookie name.
func (c *Cookie) SetKeyBytes(key []byte) {
c.key = append(c.key[:0], key...)
}
// Reset clears the cookie.
func (c *Cookie) Reset() {
c.key = c.key[:0]
c.value = c.value[:0]
c.expire = zeroTime
c.domain = c.domain[:0]
c.path = c.path[:0]
c.httpOnly = false
c.secure = false
}
// AppendBytes appends cookie representation to dst and returns
// the extended dst.
func (c *Cookie) AppendBytes(dst []byte) []byte {
if len(c.key) > 0 {
dst = append(dst, c.key...)
dst = append(dst, '=')
}
dst = append(dst, c.value...)
if !c.expire.IsZero() {
c.bufKV.value = AppendHTTPDate(c.bufKV.value[:0], c.expire)
dst = append(dst, ';', ' ')
dst = append(dst, strCookieExpires...)
dst = append(dst, '=')
dst = append(dst, c.bufKV.value...)
}
if len(c.domain) > 0 {
dst = appendCookiePart(dst, strCookieDomain, c.domain)
}
if len(c.path) > 0 {
dst = appendCookiePart(dst, strCookiePath, c.path)
}
if c.httpOnly {
dst = append(dst, ';', ' ')
dst = append(dst, strCookieHTTPOnly...)
}
if c.secure {
dst = append(dst, ';', ' ')
dst = append(dst, strCookieSecure...)
}
return dst
}
// Cookie returns cookie representation.
//
// The returned value is valid until the next call to Cookie methods.
func (c *Cookie) Cookie() []byte {
c.buf = c.AppendBytes(c.buf[:0])
return c.buf
}
// String returns cookie representation.
func (c *Cookie) String() string {
return string(c.Cookie())
}
// WriteTo writes cookie representation to w.
//
// WriteTo implements io.WriterTo interface.
func (c *Cookie) WriteTo(w io.Writer) (int64, error) {
n, err := w.Write(c.Cookie())
return int64(n), err
}
var errNoCookies = errors.New("no cookies found")
// Parse parses Set-Cookie header.
func (c *Cookie) Parse(src string) error {
c.buf = append(c.buf[:0], src...)
return c.ParseBytes(c.buf)
}
// ParseBytes parses Set-Cookie header.
func (c *Cookie) ParseBytes(src []byte) error {
c.Reset()
var s cookieScanner
s.b = src
kv := &c.bufKV
if !s.next(kv) {
return errNoCookies
}
c.key = append(c.key[:0], kv.key...)
c.value = append(c.value[:0], kv.value...)
for s.next(kv) {
if len(kv.key) == 0 && len(kv.value) == 0 {
continue
}
switch string(kv.key) {
case "expires":
v := b2s(kv.value)
exptime, err := time.ParseInLocation(time.RFC1123, v, time.UTC)
if err != nil {
return err
}
c.expire = exptime
case "domain":
c.domain = append(c.domain[:0], kv.value...)
case "path":
c.path = append(c.path[:0], kv.value...)
case "":
switch string(kv.value) {
case "HttpOnly":
c.httpOnly = true
case "secure":
c.secure = true
}
}
}
return nil
}
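// Illustrative sketch (not part of the removed file): parsing a Set-Cookie
// header into a pooled Cookie and reading its fields.
func exampleCookieParse() {
	c := AcquireCookie()
	defer ReleaseCookie(c)
	if err := c.Parse("sid=abc123; path=/; domain=example.com; HttpOnly"); err != nil {
		panic(err)
	}
	_ = c.Value()    // "abc123"
	_ = c.HTTPOnly() // true
}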
func appendCookiePart(dst, key, value []byte) []byte {
dst = append(dst, ';', ' ')
dst = append(dst, key...)
dst = append(dst, '=')
return append(dst, value...)
}
func getCookieKey(dst, src []byte) []byte {
n := bytes.IndexByte(src, '=')
if n >= 0 {
src = src[:n]
}
return decodeCookieArg(dst, src, false)
}
func appendRequestCookieBytes(dst []byte, cookies []argsKV) []byte {
for i, n := 0, len(cookies); i < n; i++ {
kv := &cookies[i]
if len(kv.key) > 0 {
dst = append(dst, kv.key...)
dst = append(dst, '=')
}
dst = append(dst, kv.value...)
if i+1 < n {
dst = append(dst, ';', ' ')
}
}
return dst
}
func parseRequestCookies(cookies []argsKV, src []byte) []argsKV {
var s cookieScanner
s.b = src
var kv *argsKV
cookies, kv = allocArg(cookies)
for s.next(kv) {
if len(kv.key) > 0 || len(kv.value) > 0 {
cookies, kv = allocArg(cookies)
}
}
return releaseArg(cookies)
}
type cookieScanner struct {
b []byte
}
func (s *cookieScanner) next(kv *argsKV) bool {
b := s.b
if len(b) == 0 {
return false
}
isKey := true
k := 0
for i, c := range b {
switch c {
case '=':
if isKey {
isKey = false
kv.key = decodeCookieArg(kv.key, b[:i], false)
k = i + 1
}
case ';':
if isKey {
kv.key = kv.key[:0]
}
kv.value = decodeCookieArg(kv.value, b[k:i], true)
s.b = b[i+1:]
return true
}
}
if isKey {
kv.key = kv.key[:0]
}
kv.value = decodeCookieArg(kv.value, b[k:], true)
s.b = b[len(b):]
return true
}
func decodeCookieArg(dst, src []byte, skipQuotes bool) []byte {
for len(src) > 0 && src[0] == ' ' {
src = src[1:]
}
for len(src) > 0 && src[len(src)-1] == ' ' {
src = src[:len(src)-1]
}
if skipQuotes {
if len(src) > 1 && src[0] == '"' && src[len(src)-1] == '"' {
src = src[1 : len(src)-1]
}
}
return append(dst[:0], src...)
}

@@ -1,59 +0,0 @@
/*
Package fasthttp provides fast HTTP server and client API.
Fasthttp provides the following features:
- Optimized for speed. Easily handles more than 100K qps and more than 1M
concurrent keep-alive connections on modern hardware.
- Optimized for low memory usage.
- Easy 'Connection: Upgrade' support via RequestCtx.Hijack.
- Server supports requests' pipelining. Multiple requests may be read from
a single network packet and multiple responses may be sent in a single
network packet. This may be useful for highly loaded REST services.
- Server provides the following anti-DoS limits:
- The number of concurrent connections.
- The number of concurrent connections per client IP.
- The number of requests per connection.
- Request read timeout.
- Response write timeout.
- Maximum request header size.
- Maximum request body size.
- Maximum request execution time.
- Maximum keep-alive connection lifetime.
- Early filtering out non-GET requests.
- A lot of additional useful info is exposed to request handler:
- Server and client address.
- Per-request logger.
- Unique request id.
- Request start time.
- Connection start time.
- Request sequence number for the current connection.
- Client supports automatic retry on idempotent requests' failure.
- Fasthttp API is designed with the ability to extend existing client
and server implementations or to write custom client and server
implementations from scratch.
*/
package fasthttp

@@ -1,2 +0,0 @@
// Package fasthttputil provides utility functions for fasthttp.
package fasthttputil

@@ -1,5 +0,0 @@
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIBpQbZ6a5jL1Yh4wdP6yZk4MKjYWArD/QOLENFw8vbELoAoGCCqGSM49
AwEHoUQDQgAEKQCZWgE2IBhb47ot8MIs1D4KSisHYlZ41IWyeutpjb0fjwwIhimh
pl1Qld1/d2j3Z3vVyfa5yD+ncV7qCFZuSg==
-----END EC PRIVATE KEY-----

@@ -1,10 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIBbTCCAROgAwIBAgIQPo718S+K+G7hc1SgTEU4QDAKBggqhkjOPQQDAjASMRAw
DgYDVQQKEwdBY21lIENvMB4XDTE3MDQyMDIxMDExNFoXDTE4MDQyMDIxMDExNFow
EjEQMA4GA1UEChMHQWNtZSBDbzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABCkA
mVoBNiAYW+O6LfDCLNQ+CkorB2JWeNSFsnrraY29H48MCIYpoaZdUJXdf3do92d7
1cn2ucg/p3Fe6ghWbkqjSzBJMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr
BgEFBQcDATAMBgNVHRMBAf8EAjAAMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAKBggq
hkjOPQQDAgNIADBFAiEAoLAIQkvSuIcHUqyWroA6yWYw2fznlRH/uO9/hMCxUCEC
IClRYb/5O9eD/Eq/ozPnwNpsQHOeYefEhadJ/P82y0lG
-----END CERTIFICATE-----

@@ -1,84 +0,0 @@
package fasthttputil
import (
"fmt"
"net"
"sync"
)
// InmemoryListener provides in-memory dialer<->net.Listener implementation.
//
// It may be used either for fast in-process client<->server communications
// without network stack overhead or for client<->server tests.
type InmemoryListener struct {
lock sync.Mutex
closed bool
conns chan net.Conn
}
// NewInmemoryListener returns new in-memory dialer<->net.Listener.
func NewInmemoryListener() *InmemoryListener {
return &InmemoryListener{
conns: make(chan net.Conn, 1024),
}
}
// Accept implements net.Listener's Accept.
//
// It is safe calling Accept from concurrently running goroutines.
//
// Accept returns a new connection for each Dial call.
func (ln *InmemoryListener) Accept() (net.Conn, error) {
c, ok := <-ln.conns
if !ok {
return nil, fmt.Errorf("InmemoryListener is already closed: use of closed network connection")
}
return c, nil
}
// Close implements net.Listener's Close.
func (ln *InmemoryListener) Close() error {
var err error
ln.lock.Lock()
if !ln.closed {
close(ln.conns)
ln.closed = true
} else {
err = fmt.Errorf("InmemoryListener is already closed")
}
ln.lock.Unlock()
return err
}
// Addr implements net.Listener's Addr.
func (ln *InmemoryListener) Addr() net.Addr {
return &net.UnixAddr{
Name: "InmemoryListener",
Net: "memory",
}
}
// Dial creates new client<->server connection, enqueues server side
// of the connection to Accept and returns client side of the connection.
//
// It is safe calling Dial from concurrently running goroutines.
func (ln *InmemoryListener) Dial() (net.Conn, error) {
pc := NewPipeConns()
cConn := pc.Conn1()
sConn := pc.Conn2()
ln.lock.Lock()
if !ln.closed {
ln.conns <- sConn
} else {
sConn.Close()
cConn.Close()
cConn = nil
}
ln.lock.Unlock()
if cConn == nil {
return nil, fmt.Errorf("InmemoryListener is already closed")
}
return cConn, nil
}
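// Illustrative sketch (not part of the removed file): serving and dialing
// entirely in memory, with error handling elided for brevity.
func exampleInmemoryListener() {
	ln := NewInmemoryListener()
	go func() {
		conn, _ := ln.Accept()
		conn.Write([]byte("pong"))
		conn.Close()
	}()
	conn, _ := ln.Dial()
	buf := make([]byte, 4)
	conn.Read(buf) // buf now holds "pong"
	conn.Close()
	ln.Close()
}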

@@ -1,283 +0,0 @@
package fasthttputil
import (
"errors"
"io"
"net"
"sync"
"time"
)
// NewPipeConns returns a new bi-directional connection pipe.
func NewPipeConns() *PipeConns {
ch1 := make(chan *byteBuffer, 4)
ch2 := make(chan *byteBuffer, 4)
pc := &PipeConns{
stopCh: make(chan struct{}),
}
pc.c1.rCh = ch1
pc.c1.wCh = ch2
pc.c2.rCh = ch2
pc.c2.wCh = ch1
pc.c1.pc = pc
pc.c2.pc = pc
return pc
}
// PipeConns provides a bi-directional connection pipe,
// which uses in-process memory as a transport.
//
// PipeConns must be created by calling NewPipeConns.
//
// PipeConns has the following additional features comparing to connections
// returned from net.Pipe():
//
// * It is faster.
// * It buffers Write calls, so there is no need to have concurrent goroutine
// calling Read in order to unblock each Write call.
// * It supports read and write deadlines.
//
type PipeConns struct {
c1 pipeConn
c2 pipeConn
stopCh chan struct{}
stopChLock sync.Mutex
}
// Conn1 returns the first end of bi-directional pipe.
//
// Data written to Conn1 may be read from Conn2.
// Data written to Conn2 may be read from Conn1.
func (pc *PipeConns) Conn1() net.Conn {
return &pc.c1
}
// Conn2 returns the second end of bi-directional pipe.
//
// Data written to Conn2 may be read from Conn1.
// Data written to Conn1 may be read from Conn2.
func (pc *PipeConns) Conn2() net.Conn {
return &pc.c2
}
// Close closes pipe connections.
func (pc *PipeConns) Close() error {
pc.stopChLock.Lock()
select {
case <-pc.stopCh:
default:
close(pc.stopCh)
}
pc.stopChLock.Unlock()
return nil
}
type pipeConn struct {
b *byteBuffer
bb []byte
rCh chan *byteBuffer
wCh chan *byteBuffer
pc *PipeConns
readDeadlineTimer *time.Timer
writeDeadlineTimer *time.Timer
readDeadlineCh <-chan time.Time
writeDeadlineCh <-chan time.Time
}
func (c *pipeConn) Write(p []byte) (int, error) {
b := acquireByteBuffer()
b.b = append(b.b[:0], p...)
select {
case <-c.pc.stopCh:
releaseByteBuffer(b)
return 0, errConnectionClosed
default:
}
select {
case c.wCh <- b:
default:
select {
case c.wCh <- b:
case <-c.writeDeadlineCh:
c.writeDeadlineCh = closedDeadlineCh
return 0, ErrTimeout
case <-c.pc.stopCh:
releaseByteBuffer(b)
return 0, errConnectionClosed
}
}
return len(p), nil
}
func (c *pipeConn) Read(p []byte) (int, error) {
mayBlock := true
nn := 0
for len(p) > 0 {
n, err := c.read(p, mayBlock)
nn += n
if err != nil {
if !mayBlock && err == errWouldBlock {
err = nil
}
return nn, err
}
p = p[n:]
mayBlock = false
}
return nn, nil
}
func (c *pipeConn) read(p []byte, mayBlock bool) (int, error) {
if len(c.bb) == 0 {
if err := c.readNextByteBuffer(mayBlock); err != nil {
return 0, err
}
}
n := copy(p, c.bb)
c.bb = c.bb[n:]
return n, nil
}
func (c *pipeConn) readNextByteBuffer(mayBlock bool) error {
releaseByteBuffer(c.b)
c.b = nil
select {
case c.b = <-c.rCh:
default:
if !mayBlock {
return errWouldBlock
}
select {
case c.b = <-c.rCh:
case <-c.readDeadlineCh:
c.readDeadlineCh = closedDeadlineCh
// rCh may contain data when deadline is reached.
// Read the data before returning ErrTimeout.
select {
case c.b = <-c.rCh:
default:
return ErrTimeout
}
case <-c.pc.stopCh:
// rCh may contain data when stopCh is closed.
// Read the data before returning EOF.
select {
case c.b = <-c.rCh:
default:
return io.EOF
}
}
}
c.bb = c.b.b
return nil
}
var (
errWouldBlock = errors.New("would block")
errConnectionClosed = errors.New("connection closed")
// ErrTimeout is returned from Read() or Write() on timeout.
ErrTimeout = errors.New("timeout")
)
func (c *pipeConn) Close() error {
return c.pc.Close()
}
func (c *pipeConn) LocalAddr() net.Addr {
return pipeAddr(0)
}
func (c *pipeConn) RemoteAddr() net.Addr {
return pipeAddr(0)
}
func (c *pipeConn) SetDeadline(deadline time.Time) error {
c.SetReadDeadline(deadline)
c.SetWriteDeadline(deadline)
return nil
}
func (c *pipeConn) SetReadDeadline(deadline time.Time) error {
if c.readDeadlineTimer == nil {
c.readDeadlineTimer = time.NewTimer(time.Hour)
}
c.readDeadlineCh = updateTimer(c.readDeadlineTimer, deadline)
return nil
}
func (c *pipeConn) SetWriteDeadline(deadline time.Time) error {
if c.writeDeadlineTimer == nil {
c.writeDeadlineTimer = time.NewTimer(time.Hour)
}
c.writeDeadlineCh = updateTimer(c.writeDeadlineTimer, deadline)
return nil
}
func updateTimer(t *time.Timer, deadline time.Time) <-chan time.Time {
if !t.Stop() {
select {
case <-t.C:
default:
}
}
if deadline.IsZero() {
return nil
}
d := -time.Since(deadline)
if d <= 0 {
return closedDeadlineCh
}
t.Reset(d)
return t.C
}
var closedDeadlineCh = func() <-chan time.Time {
ch := make(chan time.Time)
close(ch)
return ch
}()
type pipeAddr int
func (pipeAddr) Network() string {
return "pipe"
}
func (pipeAddr) String() string {
return "pipe"
}
type byteBuffer struct {
b []byte
}
func acquireByteBuffer() *byteBuffer {
return byteBufferPool.Get().(*byteBuffer)
}
func releaseByteBuffer(b *byteBuffer) {
if b != nil {
byteBufferPool.Put(b)
}
}
var byteBufferPool = &sync.Pool{
New: func() interface{} {
return &byteBuffer{
b: make([]byte, 1024),
}
},
}
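// Illustrative sketch (not part of the removed file): a buffered in-memory
// pipe. Unlike net.Pipe(), the Write below succeeds without a concurrent
// reader, because PipeConns buffers writes internally.
func examplePipeConns() {
	pc := NewPipeConns()
	c1, c2 := pc.Conn1(), pc.Conn2()
	c1.Write([]byte("ping")) // does not block waiting for a reader
	buf := make([]byte, 4)
	c2.Read(buf) // buf now holds "ping"
	pc.Close()
}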

@@ -1,28 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQD4IQusAs8PJdnG
3mURt/AXtgC+ceqLOatJ49JJE1VPTkMAy+oE1f1XvkMrYsHqmDf6GWVzgVXryL4U
wq2/nJSm56ddhN55nI8oSN3dtywUB8/ShelEN73nlN77PeD9tl6NksPwWaKrqxq0
FlabRPZSQCfmgZbhDV8Sa8mfCkFU0G0lit6kLGceCKMvmW+9Bz7ebsYmVdmVMxmf
IJStFD44lWFTdUc65WISKEdW2ELcUefb0zOLw+0PCbXFGJH5x5ktksW8+BBk2Hkg
GeQRL/qPCccthbScO0VgNj3zJ3ZZL0ObSDAbvNDG85joeNjDNq5DT/BAZ0bOSbEF
sh+f9BAzAgMBAAECggEBAJWv2cq7Jw6MVwSRxYca38xuD6TUNBopgBvjREixURW2
sNUaLuMb9Omp7fuOaE2N5rcJ+xnjPGIxh/oeN5MQctz9gwn3zf6vY+15h97pUb4D
uGvYPRDaT8YVGS+X9NMZ4ZCmqW2lpWzKnCFoGHcy8yZLbcaxBsRdvKzwOYGoPiFb
K2QuhXZ/1UPmqK9i2DFKtj40X6vBszTNboFxOVpXrPu0FJwLVSDf2hSZ4fMM0DH3
YqwKcYf5te+hxGKgrqRA3tn0NCWii0in6QIwXMC+kMw1ebg/tZKqyDLMNptAK8J+
DVw9m5X1seUHS5ehU/g2jrQrtK5WYn7MrFK4lBzlRwECgYEA/d1TeANYECDWRRDk
B0aaRZs87Rwl/J9PsvbsKvtU/bX+OfSOUjOa9iQBqn0LmU8GqusEET/QVUfocVwV
Bggf/5qDLxz100Rj0ags/yE/kNr0Bb31kkkKHFMnCT06YasR7qKllwrAlPJvQv9x
IzBKq+T/Dx08Wep9bCRSFhzRCnsCgYEA+jdeZXTDr/Vz+D2B3nAw1frqYFfGnEVY
wqmoK3VXMDkGuxsloO2rN+SyiUo3JNiQNPDub/t7175GH5pmKtZOlftePANsUjBj
wZ1D0rI5Bxu/71ibIUYIRVmXsTEQkh/ozoh3jXCZ9+bLgYiYx7789IUZZSokFQ3D
FICUT9KJ36kCgYAGoq9Y1rWJjmIrYfqj2guUQC+CfxbbGIrrwZqAsRsSmpwvhZ3m
tiSZxG0quKQB+NfSxdvQW5ulbwC7Xc3K35F+i9pb8+TVBdeaFkw+yu6vaZmxQLrX
fQM/pEjD7A7HmMIaO7QaU5SfEAsqdCTP56Y8AftMuNXn/8IRfo2KuGwaWwKBgFpU
ILzJoVdlad9E/Rw7LjYhZfkv1uBVXIyxyKcfrkEXZSmozDXDdxsvcZCEfVHM6Ipk
K/+7LuMcqp4AFEAEq8wTOdq6daFaHLkpt/FZK6M4TlruhtpFOPkoNc3e45eM83OT
6mziKINJC1CQ6m65sQHpBtjxlKMRG8rL/D6wx9s5AoGBAMRlqNPMwglT3hvDmsAt
9Lf9pdmhERUlHhD8bj8mDaBj2Aqv7f6VRJaYZqP403pKKQexuqcn80mtjkSAPFkN
Cj7BVt/RXm5uoxDTnfi26RF9F6yNDEJ7UU9+peBr99aazF/fTgW/1GcMkQnum8uV
c257YgaWmjK9uB0Y2r2VxS0G
-----END PRIVATE KEY-----

@@ -1,17 +0,0 @@
-----BEGIN CERTIFICATE-----
MIICujCCAaKgAwIBAgIJAMbXnKZ/cikUMA0GCSqGSIb3DQEBCwUAMBUxEzARBgNV
BAMTCnVidW50dS5uYW4wHhcNMTUwMjA0MDgwMTM5WhcNMjUwMjAxMDgwMTM5WjAV
MRMwEQYDVQQDEwp1YnVudHUubmFuMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEA+CELrALPDyXZxt5lEbfwF7YAvnHqizmrSePSSRNVT05DAMvqBNX9V75D
K2LB6pg3+hllc4FV68i+FMKtv5yUpuenXYTeeZyPKEjd3bcsFAfP0oXpRDe955Te
+z3g/bZejZLD8Fmiq6satBZWm0T2UkAn5oGW4Q1fEmvJnwpBVNBtJYrepCxnHgij
L5lvvQc+3m7GJlXZlTMZnyCUrRQ+OJVhU3VHOuViEihHVthC3FHn29Mzi8PtDwm1
xRiR+ceZLZLFvPgQZNh5IBnkES/6jwnHLYW0nDtFYDY98yd2WS9Dm0gwG7zQxvOY
6HjYwzauQ0/wQGdGzkmxBbIfn/QQMwIDAQABow0wCzAJBgNVHRMEAjAAMA0GCSqG
SIb3DQEBCwUAA4IBAQBQjKm/4KN/iTgXbLTL3i7zaxYXFLXsnT1tF+ay4VA8aj98
L3JwRTciZ3A5iy/W4VSCt3eASwOaPWHKqDBB5RTtL73LoAqsWmO3APOGQAbixcQ2
45GXi05OKeyiYRi1Nvq7Unv9jUkRDHUYVPZVSAjCpsXzPhFkmZoTRxmx5l0ZF7Li
K91lI5h+eFq0dwZwrmlPambyh1vQUi70VHv8DNToVU29kel7YLbxGbuqETfhrcy6
X+Mha6RYITkAn5FqsZcKMsc9eYGEF4l3XV+oS7q6xfTxktYJMFTI18J0lQ2Lv/CI
whdMnYGntDQBE/iFCrJEGNsKGc38796GBOb5j+zd
-----END CERTIFICATE-----

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,183 +0,0 @@
package fasthttp
import (
"sync"
"sync/atomic"
"time"
)
// BalancingClient is the interface for clients, which may be passed
// to LBClient.Clients.
type BalancingClient interface {
DoDeadline(req *Request, resp *Response, deadline time.Time) error
PendingRequests() int
}
// LBClient balances requests among available LBClient.Clients.
//
// It has the following features:
//
// - Balances load among available clients using 'least loaded' + 'round robin'
// hybrid technique.
// - Dynamically decreases load on unhealthy clients.
//
// It is forbidden copying LBClient instances. Create new instances instead.
//
// It is safe calling LBClient methods from concurrently running goroutines.
type LBClient struct {
noCopy noCopy
// Clients must contain non-zero clients list.
// Incoming requests are balanced among these clients.
Clients []BalancingClient
// HealthCheck is a callback called after each request.
//
// The request, response and the error returned by the client
// is passed to HealthCheck, so the callback may determine whether
// the client is healthy.
//
// Load on the current client is decreased if HealthCheck returns false.
//
// By default HealthCheck returns false if err != nil.
HealthCheck func(req *Request, resp *Response, err error) bool
// Timeout is the request timeout used when calling LBClient.Do.
//
// DefaultLBClientTimeout is used by default.
Timeout time.Duration
cs []*lbClient
// nextIdx is for spreading requests among equally loaded clients
// in a round-robin fashion.
nextIdx uint32
once sync.Once
}
// DefaultLBClientTimeout is the default request timeout used by LBClient
// when calling LBClient.Do.
//
// The timeout may be overridden via LBClient.Timeout.
const DefaultLBClientTimeout = time.Second
// DoDeadline calls DoDeadline on the least loaded client
func (cc *LBClient) DoDeadline(req *Request, resp *Response, deadline time.Time) error {
return cc.get().DoDeadline(req, resp, deadline)
}
// DoTimeout calculates deadline and calls DoDeadline on the least loaded client
func (cc *LBClient) DoTimeout(req *Request, resp *Response, timeout time.Duration) error {
deadline := time.Now().Add(timeout)
return cc.get().DoDeadline(req, resp, deadline)
}
// Do calculates the deadline using LBClient.Timeout and calls DoDeadline
// on the least loaded client.
func (cc *LBClient) Do(req *Request, resp *Response) error {
timeout := cc.Timeout
if timeout <= 0 {
timeout = DefaultLBClientTimeout
}
return cc.DoTimeout(req, resp, timeout)
}
func (cc *LBClient) init() {
if len(cc.Clients) == 0 {
panic("BUG: LBClient.Clients cannot be empty")
}
for _, c := range cc.Clients {
cc.cs = append(cc.cs, &lbClient{
c: c,
healthCheck: cc.HealthCheck,
})
}
// Randomize nextIdx in order to prevent a cluster of identical LBClients
// from hammering the same servers initially.
cc.nextIdx = uint32(time.Now().UnixNano())
}
func (cc *LBClient) get() *lbClient {
cc.once.Do(cc.init)
cs := cc.cs
idx := atomic.AddUint32(&cc.nextIdx, 1)
idx %= uint32(len(cs))
minC := cs[idx]
minN := minC.PendingRequests()
if minN == 0 {
return minC
}
for _, c := range cs[idx+1:] {
n := c.PendingRequests()
if n == 0 {
return c
}
if n < minN {
minC = c
minN = n
}
}
for _, c := range cs[:idx] {
n := c.PendingRequests()
if n == 0 {
return c
}
if n < minN {
minC = c
minN = n
}
}
return minC
}
type lbClient struct {
c BalancingClient
healthCheck func(req *Request, resp *Response, err error) bool
penalty uint32
}
func (c *lbClient) DoDeadline(req *Request, resp *Response, deadline time.Time) error {
err := c.c.DoDeadline(req, resp, deadline)
if !c.isHealthy(req, resp, err) && c.incPenalty() {
// Penalize the client returning the error, so subsequent requests
// are routed to other clients.
time.AfterFunc(penaltyDuration, c.decPenalty)
}
return err
}
func (c *lbClient) PendingRequests() int {
n := c.c.PendingRequests()
m := atomic.LoadUint32(&c.penalty)
return n + int(m)
}
func (c *lbClient) isHealthy(req *Request, resp *Response, err error) bool {
if c.healthCheck == nil {
return err == nil
}
return c.healthCheck(req, resp, err)
}
func (c *lbClient) incPenalty() bool {
m := atomic.AddUint32(&c.penalty, 1)
if m > maxPenalty {
c.decPenalty()
return false
}
return true
}
func (c *lbClient) decPenalty() {
atomic.AddUint32(&c.penalty, ^uint32(0))
}
const (
maxPenalty = 300
penaltyDuration = 3 * time.Second
)
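// Illustrative sketch (not part of the removed file): balancing requests over
// two backends. The host names and timeout are placeholders; *HostClient from
// this package implements DoDeadline and PendingRequests, so it satisfies
// BalancingClient.
func exampleLBClient() {
	lb := &LBClient{
		Clients: []BalancingClient{
			&HostClient{Addr: "backend1:80"},
			&HostClient{Addr: "backend2:80"},
		},
		Timeout: 3 * time.Second,
	}
	req := AcquireRequest()
	resp := AcquireResponse()
	req.SetRequestURI("http://backend1/health")
	err := lb.Do(req, resp) // routed to the least loaded backend
	_ = err
}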

@@ -1,9 +0,0 @@
package fasthttp
// Embed this type into a struct, which mustn't be copied,
// so `go vet` gives a warning if this struct is copied.
//
// See https://github.com/golang/go/issues/8005#issuecomment-190753527 for details.
type noCopy struct{}
func (*noCopy) Lock() {}

@@ -1,100 +0,0 @@
package fasthttp
import (
"fmt"
"net"
"sync"
)
type perIPConnCounter struct {
pool sync.Pool
lock sync.Mutex
m map[uint32]int
}
func (cc *perIPConnCounter) Register(ip uint32) int {
cc.lock.Lock()
if cc.m == nil {
cc.m = make(map[uint32]int)
}
n := cc.m[ip] + 1
cc.m[ip] = n
cc.lock.Unlock()
return n
}
func (cc *perIPConnCounter) Unregister(ip uint32) {
cc.lock.Lock()
if cc.m == nil {
cc.lock.Unlock()
panic("BUG: perIPConnCounter.Register() wasn't called")
}
n := cc.m[ip] - 1
if n < 0 {
cc.lock.Unlock()
panic(fmt.Sprintf("BUG: negative per-ip counter=%d for ip=%d", n, ip))
}
cc.m[ip] = n
cc.lock.Unlock()
}
type perIPConn struct {
net.Conn
ip uint32
perIPConnCounter *perIPConnCounter
}
func acquirePerIPConn(conn net.Conn, ip uint32, counter *perIPConnCounter) *perIPConn {
v := counter.pool.Get()
if v == nil {
v = &perIPConn{
perIPConnCounter: counter,
}
}
c := v.(*perIPConn)
c.Conn = conn
c.ip = ip
return c
}
func releasePerIPConn(c *perIPConn) {
c.Conn = nil
c.perIPConnCounter.pool.Put(c)
}
func (c *perIPConn) Close() error {
err := c.Conn.Close()
c.perIPConnCounter.Unregister(c.ip)
releasePerIPConn(c)
return err
}
func getUint32IP(c net.Conn) uint32 {
return ip2uint32(getConnIP4(c))
}
func getConnIP4(c net.Conn) net.IP {
addr := c.RemoteAddr()
ipAddr, ok := addr.(*net.TCPAddr)
if !ok {
return net.IPv4zero
}
return ipAddr.IP.To4()
}
func ip2uint32(ip net.IP) uint32 {
if len(ip) != 4 {
return 0
}
return uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
}
func uint322ip(ip uint32) net.IP {
b := make([]byte, 4)
b[0] = byte(ip >> 24)
b[1] = byte(ip >> 16)
b[2] = byte(ip >> 8)
b[3] = byte(ip)
return b
}
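// Illustrative sketch (not part of the removed file): the per-IP connection
// counter above keys its map by packing the IPv4 address into a uint32.
func exampleIPPacking() {
	ip := net.IPv4(10, 0, 0, 1).To4()
	n := ip2uint32(ip)   // 0x0A000001
	back := uint322ip(n) // 10.0.0.1 again
	_ = back
}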

File diff suppressed because it is too large

@@ -1,28 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQD4IQusAs8PJdnG
3mURt/AXtgC+ceqLOatJ49JJE1VPTkMAy+oE1f1XvkMrYsHqmDf6GWVzgVXryL4U
wq2/nJSm56ddhN55nI8oSN3dtywUB8/ShelEN73nlN77PeD9tl6NksPwWaKrqxq0
FlabRPZSQCfmgZbhDV8Sa8mfCkFU0G0lit6kLGceCKMvmW+9Bz7ebsYmVdmVMxmf
IJStFD44lWFTdUc65WISKEdW2ELcUefb0zOLw+0PCbXFGJH5x5ktksW8+BBk2Hkg
GeQRL/qPCccthbScO0VgNj3zJ3ZZL0ObSDAbvNDG85joeNjDNq5DT/BAZ0bOSbEF
sh+f9BAzAgMBAAECggEBAJWv2cq7Jw6MVwSRxYca38xuD6TUNBopgBvjREixURW2
sNUaLuMb9Omp7fuOaE2N5rcJ+xnjPGIxh/oeN5MQctz9gwn3zf6vY+15h97pUb4D
uGvYPRDaT8YVGS+X9NMZ4ZCmqW2lpWzKnCFoGHcy8yZLbcaxBsRdvKzwOYGoPiFb
K2QuhXZ/1UPmqK9i2DFKtj40X6vBszTNboFxOVpXrPu0FJwLVSDf2hSZ4fMM0DH3
YqwKcYf5te+hxGKgrqRA3tn0NCWii0in6QIwXMC+kMw1ebg/tZKqyDLMNptAK8J+
DVw9m5X1seUHS5ehU/g2jrQrtK5WYn7MrFK4lBzlRwECgYEA/d1TeANYECDWRRDk
B0aaRZs87Rwl/J9PsvbsKvtU/bX+OfSOUjOa9iQBqn0LmU8GqusEET/QVUfocVwV
Bggf/5qDLxz100Rj0ags/yE/kNr0Bb31kkkKHFMnCT06YasR7qKllwrAlPJvQv9x
IzBKq+T/Dx08Wep9bCRSFhzRCnsCgYEA+jdeZXTDr/Vz+D2B3nAw1frqYFfGnEVY
wqmoK3VXMDkGuxsloO2rN+SyiUo3JNiQNPDub/t7175GH5pmKtZOlftePANsUjBj
wZ1D0rI5Bxu/71ibIUYIRVmXsTEQkh/ozoh3jXCZ9+bLgYiYx7789IUZZSokFQ3D
FICUT9KJ36kCgYAGoq9Y1rWJjmIrYfqj2guUQC+CfxbbGIrrwZqAsRsSmpwvhZ3m
tiSZxG0quKQB+NfSxdvQW5ulbwC7Xc3K35F+i9pb8+TVBdeaFkw+yu6vaZmxQLrX
fQM/pEjD7A7HmMIaO7QaU5SfEAsqdCTP56Y8AftMuNXn/8IRfo2KuGwaWwKBgFpU
ILzJoVdlad9E/Rw7LjYhZfkv1uBVXIyxyKcfrkEXZSmozDXDdxsvcZCEfVHM6Ipk
K/+7LuMcqp4AFEAEq8wTOdq6daFaHLkpt/FZK6M4TlruhtpFOPkoNc3e45eM83OT
6mziKINJC1CQ6m65sQHpBtjxlKMRG8rL/D6wx9s5AoGBAMRlqNPMwglT3hvDmsAt
9Lf9pdmhERUlHhD8bj8mDaBj2Aqv7f6VRJaYZqP403pKKQexuqcn80mtjkSAPFkN
Cj7BVt/RXm5uoxDTnfi26RF9F6yNDEJ7UU9+peBr99aazF/fTgW/1GcMkQnum8uV
c257YgaWmjK9uB0Y2r2VxS0G
-----END PRIVATE KEY-----

@@ -1,17 +0,0 @@
-----BEGIN CERTIFICATE-----
MIICujCCAaKgAwIBAgIJAMbXnKZ/cikUMA0GCSqGSIb3DQEBCwUAMBUxEzARBgNV
BAMTCnVidW50dS5uYW4wHhcNMTUwMjA0MDgwMTM5WhcNMjUwMjAxMDgwMTM5WjAV
MRMwEQYDVQQDEwp1YnVudHUubmFuMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEA+CELrALPDyXZxt5lEbfwF7YAvnHqizmrSePSSRNVT05DAMvqBNX9V75D
K2LB6pg3+hllc4FV68i+FMKtv5yUpuenXYTeeZyPKEjd3bcsFAfP0oXpRDe955Te
+z3g/bZejZLD8Fmiq6satBZWm0T2UkAn5oGW4Q1fEmvJnwpBVNBtJYrepCxnHgij
L5lvvQc+3m7GJlXZlTMZnyCUrRQ+OJVhU3VHOuViEihHVthC3FHn29Mzi8PtDwm1
xRiR+ceZLZLFvPgQZNh5IBnkES/6jwnHLYW0nDtFYDY98yd2WS9Dm0gwG7zQxvOY
6HjYwzauQ0/wQGdGzkmxBbIfn/QQMwIDAQABow0wCzAJBgNVHRMEAjAAMA0GCSqG
SIb3DQEBCwUAA4IBAQBQjKm/4KN/iTgXbLTL3i7zaxYXFLXsnT1tF+ay4VA8aj98
L3JwRTciZ3A5iy/W4VSCt3eASwOaPWHKqDBB5RTtL73LoAqsWmO3APOGQAbixcQ2
45GXi05OKeyiYRi1Nvq7Unv9jUkRDHUYVPZVSAjCpsXzPhFkmZoTRxmx5l0ZF7Li
K91lI5h+eFq0dwZwrmlPambyh1vQUi70VHv8DNToVU29kel7YLbxGbuqETfhrcy6
X+Mha6RYITkAn5FqsZcKMsc9eYGEF4l3XV+oS7q6xfTxktYJMFTI18J0lQ2Lv/CI
whdMnYGntDQBE/iFCrJEGNsKGc38796GBOb5j+zd
-----END CERTIFICATE-----

@@ -1,3 +0,0 @@
// Package stackless provides functionality that may save stack space
// for high number of concurrently running goroutines.
package stackless

@@ -1,79 +0,0 @@
package stackless
import (
"runtime"
"sync"
)
// NewFunc returns stackless wrapper for the function f.
//
// Unlike f, the returned stackless wrapper doesn't use stack space
// on the goroutine that calls it.
// The wrapper may save a lot of stack space if the following conditions
// are met:
//
// - f doesn't contain blocking calls on network, I/O or channels;
// - f uses a lot of stack space;
// - the wrapper is called from high number of concurrent goroutines.
//
// The stackless wrapper returns false if the call cannot be processed
// at the moment due to high load.
func NewFunc(f func(ctx interface{})) func(ctx interface{}) bool {
if f == nil {
panic("BUG: f cannot be nil")
}
funcWorkCh := make(chan *funcWork, runtime.GOMAXPROCS(-1)*2048)
onceInit := func() {
n := runtime.GOMAXPROCS(-1)
for i := 0; i < n; i++ {
go funcWorker(funcWorkCh, f)
}
}
var once sync.Once
return func(ctx interface{}) bool {
once.Do(onceInit)
fw := getFuncWork()
fw.ctx = ctx
select {
case funcWorkCh <- fw:
default:
putFuncWork(fw)
return false
}
<-fw.done
putFuncWork(fw)
return true
}
}
func funcWorker(funcWorkCh <-chan *funcWork, f func(ctx interface{})) {
for fw := range funcWorkCh {
f(fw.ctx)
fw.done <- struct{}{}
}
}
func getFuncWork() *funcWork {
v := funcWorkPool.Get()
if v == nil {
v = &funcWork{
done: make(chan struct{}, 1),
}
}
return v.(*funcWork)
}
func putFuncWork(fw *funcWork) {
fw.ctx = nil
funcWorkPool.Put(fw)
}
var funcWorkPool sync.Pool
type funcWork struct {
ctx interface{}
done chan struct{}
}
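// Illustrative sketch (not part of the removed file): wrapping a stack-hungry
// CPU-bound function. The boolean result must be checked, since the wrapper
// sheds load instead of queueing work unboundedly.
func exampleNewFunc() {
	heavy := NewFunc(func(ctx interface{}) {
		data := ctx.([]byte)
		_ = data // imagine deep recursion or large stack frames here
	})
	if !heavy([]byte("payload")) {
		// too many concurrent calls: fall back to running the work inline
	}
}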

@@ -1,138 +0,0 @@
package stackless
import (
"errors"
"fmt"
"github.com/valyala/bytebufferpool"
"io"
)
// Writer is an interface stackless writer must conform to.
//
// The interface contains common subset for Writers from compress/* packages.
type Writer interface {
Write(p []byte) (int, error)
Flush() error
Close() error
Reset(w io.Writer)
}
// NewWriterFunc must return new writer that will be wrapped into
// stackless writer.
type NewWriterFunc func(w io.Writer) Writer
// NewWriter creates a stackless writer around a writer returned
// from newWriter.
//
// The returned writer writes data to dstW.
//
// Writers that use a lot of stack space may be wrapped into stackless writer,
// thus saving stack space for high number of concurrently running goroutines.
func NewWriter(dstW io.Writer, newWriter NewWriterFunc) Writer {
w := &writer{
dstW: dstW,
}
w.zw = newWriter(&w.xw)
return w
}
type writer struct {
dstW io.Writer
zw Writer
xw xWriter
err error
n int
p []byte
op op
}
type op int
const (
opWrite op = iota
opFlush
opClose
opReset
)
func (w *writer) Write(p []byte) (int, error) {
w.p = p
err := w.do(opWrite)
w.p = nil
return w.n, err
}
func (w *writer) Flush() error {
return w.do(opFlush)
}
func (w *writer) Close() error {
return w.do(opClose)
}
func (w *writer) Reset(dstW io.Writer) {
w.xw.Reset()
w.do(opReset)
w.dstW = dstW
}
func (w *writer) do(op op) error {
w.op = op
if !stacklessWriterFunc(w) {
return errHighLoad
}
err := w.err
if err != nil {
return err
}
if w.xw.bb != nil && len(w.xw.bb.B) > 0 {
_, err = w.dstW.Write(w.xw.bb.B)
}
w.xw.Reset()
return err
}
var errHighLoad = errors.New("cannot compress data due to high load")
var stacklessWriterFunc = NewFunc(writerFunc)
func writerFunc(ctx interface{}) {
w := ctx.(*writer)
switch w.op {
case opWrite:
w.n, w.err = w.zw.Write(w.p)
case opFlush:
w.err = w.zw.Flush()
case opClose:
w.err = w.zw.Close()
case opReset:
w.zw.Reset(&w.xw)
w.err = nil
default:
panic(fmt.Sprintf("BUG: unexpected op: %d", w.op))
}
}
type xWriter struct {
bb *bytebufferpool.ByteBuffer
}
func (w *xWriter) Write(p []byte) (int, error) {
if w.bb == nil {
w.bb = bufferPool.Get()
}
w.bb.Write(p)
return len(p), nil
}
func (w *xWriter) Reset() {
if w.bb != nil {
bufferPool.Put(w.bb)
w.bb = nil
}
}
var bufferPool bytebufferpool.Pool
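// Illustrative sketch (not part of the removed file): wrapping a gzip writer
// so its large compression frames run on the shared worker goroutines.
// Assumes github.com/klauspost/compress/gzip is imported, matching the
// compression package used elsewhere in this repository.
func exampleStacklessWriter(dst io.Writer) error {
	zw := NewWriter(dst, func(w io.Writer) Writer {
		return gzip.NewWriter(w)
	})
	if _, err := zw.Write([]byte("payload")); err != nil {
		return err
	}
	return zw.Close()
}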

@@ -1,176 +0,0 @@
package fasthttp
import (
"fmt"
"sync/atomic"
)
// HTTP status codes were stolen from net/http.
const (
StatusContinue = 100 // RFC 7231, 6.2.1
StatusSwitchingProtocols = 101 // RFC 7231, 6.2.2
StatusProcessing = 102 // RFC 2518, 10.1
StatusOK = 200 // RFC 7231, 6.3.1
StatusCreated = 201 // RFC 7231, 6.3.2
StatusAccepted = 202 // RFC 7231, 6.3.3
StatusNonAuthoritativeInfo = 203 // RFC 7231, 6.3.4
StatusNoContent = 204 // RFC 7231, 6.3.5
StatusResetContent = 205 // RFC 7231, 6.3.6
StatusPartialContent = 206 // RFC 7233, 4.1
StatusMultiStatus = 207 // RFC 4918, 11.1
StatusAlreadyReported = 208 // RFC 5842, 7.1
StatusIMUsed = 226 // RFC 3229, 10.4.1
StatusMultipleChoices = 300 // RFC 7231, 6.4.1
StatusMovedPermanently = 301 // RFC 7231, 6.4.2
StatusFound = 302 // RFC 7231, 6.4.3
StatusSeeOther = 303 // RFC 7231, 6.4.4
StatusNotModified = 304 // RFC 7232, 4.1
StatusUseProxy = 305 // RFC 7231, 6.4.5
_ = 306 // RFC 7231, 6.4.6 (Unused)
StatusTemporaryRedirect = 307 // RFC 7231, 6.4.7
StatusPermanentRedirect = 308 // RFC 7538, 3
StatusBadRequest = 400 // RFC 7231, 6.5.1
StatusUnauthorized = 401 // RFC 7235, 3.1
StatusPaymentRequired = 402 // RFC 7231, 6.5.2
StatusForbidden = 403 // RFC 7231, 6.5.3
StatusNotFound = 404 // RFC 7231, 6.5.4
StatusMethodNotAllowed = 405 // RFC 7231, 6.5.5
StatusNotAcceptable = 406 // RFC 7231, 6.5.6
StatusProxyAuthRequired = 407 // RFC 7235, 3.2
StatusRequestTimeout = 408 // RFC 7231, 6.5.7
StatusConflict = 409 // RFC 7231, 6.5.8
StatusGone = 410 // RFC 7231, 6.5.9
StatusLengthRequired = 411 // RFC 7231, 6.5.10
StatusPreconditionFailed = 412 // RFC 7232, 4.2
StatusRequestEntityTooLarge = 413 // RFC 7231, 6.5.11
StatusRequestURITooLong = 414 // RFC 7231, 6.5.12
StatusUnsupportedMediaType = 415 // RFC 7231, 6.5.13
StatusRequestedRangeNotSatisfiable = 416 // RFC 7233, 4.4
StatusExpectationFailed = 417 // RFC 7231, 6.5.14
StatusTeapot = 418 // RFC 7168, 2.3.3
StatusUnprocessableEntity = 422 // RFC 4918, 11.2
StatusLocked = 423 // RFC 4918, 11.3
StatusFailedDependency = 424 // RFC 4918, 11.4
StatusUpgradeRequired = 426 // RFC 7231, 6.5.15
StatusPreconditionRequired = 428 // RFC 6585, 3
StatusTooManyRequests = 429 // RFC 6585, 4
StatusRequestHeaderFieldsTooLarge = 431 // RFC 6585, 5
StatusUnavailableForLegalReasons = 451 // RFC 7725, 3
StatusInternalServerError = 500 // RFC 7231, 6.6.1
StatusNotImplemented = 501 // RFC 7231, 6.6.2
StatusBadGateway = 502 // RFC 7231, 6.6.3
StatusServiceUnavailable = 503 // RFC 7231, 6.6.4
StatusGatewayTimeout = 504 // RFC 7231, 6.6.5
StatusHTTPVersionNotSupported = 505 // RFC 7231, 6.6.6
StatusVariantAlsoNegotiates = 506 // RFC 2295, 8.1
StatusInsufficientStorage = 507 // RFC 4918, 11.5
StatusLoopDetected = 508 // RFC 5842, 7.2
StatusNotExtended = 510 // RFC 2774, 7
StatusNetworkAuthenticationRequired = 511 // RFC 6585, 6
)
var (
statusLines atomic.Value
statusMessages = map[int]string{
StatusContinue: "Continue",
StatusSwitchingProtocols: "Switching Protocols",
StatusProcessing: "Processing",
StatusOK: "OK",
StatusCreated: "Created",
StatusAccepted: "Accepted",
StatusNonAuthoritativeInfo: "Non-Authoritative Information",
StatusNoContent: "No Content",
StatusResetContent: "Reset Content",
StatusPartialContent: "Partial Content",
StatusMultiStatus: "Multi-Status",
StatusAlreadyReported: "Already Reported",
StatusIMUsed: "IM Used",
StatusMultipleChoices: "Multiple Choices",
StatusMovedPermanently: "Moved Permanently",
StatusFound: "Found",
StatusSeeOther: "See Other",
StatusNotModified: "Not Modified",
StatusUseProxy: "Use Proxy",
StatusTemporaryRedirect: "Temporary Redirect",
StatusPermanentRedirect: "Permanent Redirect",
StatusBadRequest: "Bad Request",
StatusUnauthorized: "Unauthorized",
StatusPaymentRequired: "Payment Required",
StatusForbidden: "Forbidden",
StatusNotFound: "Not Found",
StatusMethodNotAllowed: "Method Not Allowed",
StatusNotAcceptable: "Not Acceptable",
StatusProxyAuthRequired: "Proxy Authentication Required",
StatusRequestTimeout: "Request Timeout",
StatusConflict: "Conflict",
StatusGone: "Gone",
StatusLengthRequired: "Length Required",
StatusPreconditionFailed: "Precondition Failed",
StatusRequestEntityTooLarge: "Request Entity Too Large",
StatusRequestURITooLong: "Request URI Too Long",
StatusUnsupportedMediaType: "Unsupported Media Type",
StatusRequestedRangeNotSatisfiable: "Requested Range Not Satisfiable",
StatusExpectationFailed: "Expectation Failed",
StatusTeapot: "I'm a teapot",
StatusUnprocessableEntity: "Unprocessable Entity",
StatusLocked: "Locked",
StatusFailedDependency: "Failed Dependency",
StatusUpgradeRequired: "Upgrade Required",
StatusPreconditionRequired: "Precondition Required",
StatusTooManyRequests: "Too Many Requests",
StatusRequestHeaderFieldsTooLarge: "Request Header Fields Too Large",
StatusUnavailableForLegalReasons: "Unavailable For Legal Reasons",
StatusInternalServerError: "Internal Server Error",
StatusNotImplemented: "Not Implemented",
StatusBadGateway: "Bad Gateway",
StatusServiceUnavailable: "Service Unavailable",
StatusGatewayTimeout: "Gateway Timeout",
StatusHTTPVersionNotSupported: "HTTP Version Not Supported",
StatusVariantAlsoNegotiates: "Variant Also Negotiates",
StatusInsufficientStorage: "Insufficient Storage",
StatusLoopDetected: "Loop Detected",
StatusNotExtended: "Not Extended",
StatusNetworkAuthenticationRequired: "Network Authentication Required",
}
)
// StatusMessage returns HTTP status message for the given status code.
func StatusMessage(statusCode int) string {
s := statusMessages[statusCode]
if s == "" {
s = "Unknown Status Code"
}
return s
}
func init() {
statusLines.Store(make(map[int][]byte))
}
func statusLine(statusCode int) []byte {
m := statusLines.Load().(map[int][]byte)
h := m[statusCode]
if h != nil {
return h
}
statusText := StatusMessage(statusCode)
h = []byte(fmt.Sprintf("HTTP/1.1 %d %s\r\n", statusCode, statusText))
newM := make(map[int][]byte, len(m)+1)
for k, v := range m {
newM[k] = v
}
newM[statusCode] = h
statusLines.Store(newM)
return h
}
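The `statusLine` function above uses a copy-on-write map behind `atomic.Value`: readers never take a lock, while writers clone, extend and atomically swap the map. A standalone sketch of the same idiom (names are illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cache mirrors the statusLine idiom above: readers load an immutable map
// without locking; writers clone, extend and atomically swap it.
var cache atomic.Value // holds map[int][]byte

func init() {
	cache.Store(make(map[int][]byte))
}

func cachedLine(code int, text string) []byte {
	m := cache.Load().(map[int][]byte)
	if line := m[code]; line != nil {
		return line
	}
	line := []byte(fmt.Sprintf("HTTP/1.1 %d %s\r\n", code, text))
	// Clone-and-swap: two concurrent writers may both recompute the line,
	// but readers can never observe a partially updated map.
	newM := make(map[int][]byte, len(m)+1)
	for k, v := range m {
		newM[k] = v
	}
	newM[code] = line
	cache.Store(newM)
	return line
}

func main() {
	fmt.Printf("%s", cachedLine(200, "OK"))
}
```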

@@ -1,54 +0,0 @@
package fasthttp
import (
"bufio"
"io"
"sync"
"github.com/VictoriaMetrics/fasthttp/fasthttputil"
)
// StreamWriter must write data to w.
//
// Usually StreamWriter writes data to w in a loop (aka 'data streaming').
//
// StreamWriter must return immediately if w returns error.
//
// Since the written data is buffered, do not forget calling w.Flush
// when the data must be propagated to reader.
type StreamWriter func(w *bufio.Writer)
// NewStreamReader returns a reader, which replays all the data generated by sw.
//
// The returned reader may be passed to Response.SetBodyStream.
//
// Close must be called on the returned reader after all the required data
// has been read. Otherwise goroutine leak may occur.
//
// See also Response.SetBodyStreamWriter.
func NewStreamReader(sw StreamWriter) io.ReadCloser {
pc := fasthttputil.NewPipeConns()
pw := pc.Conn1()
pr := pc.Conn2()
var bw *bufio.Writer
v := streamWriterBufPool.Get()
if v == nil {
bw = bufio.NewWriter(pw)
} else {
bw = v.(*bufio.Writer)
bw.Reset(pw)
}
go func() {
sw(bw)
bw.Flush()
pw.Close()
streamWriterBufPool.Put(bw)
}()
return pr
}
var streamWriterBufPool sync.Pool
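A usage sketch for `NewStreamReader`, assuming the vendored `github.com/VictoriaMetrics/fasthttp` import path; note the mandatory `Close` to avoid leaking the writer goroutine:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"

	"github.com/VictoriaMetrics/fasthttp"
)

func main() {
	r := fasthttp.NewStreamReader(func(w *bufio.Writer) {
		for i := 0; i < 3; i++ {
			fmt.Fprintf(w, "chunk %d\n", i)
			w.Flush() // propagate buffered data to the reader side
		}
	})
	defer r.Close() // mandatory: otherwise the writer goroutine leaks
	io.Copy(os.Stdout, r)
}
```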

@@ -1,73 +0,0 @@
package fasthttp
var (
defaultServerName = []byte("fasthttp")
defaultUserAgent = []byte("fasthttp")
defaultContentType = []byte("text/plain; charset=utf-8")
)
var (
strSlash = []byte("/")
strSlashSlash = []byte("//")
strSlashDotDot = []byte("/..")
strSlashDotSlash = []byte("/./")
strSlashDotDotSlash = []byte("/../")
strCRLF = []byte("\r\n")
strHTTP = []byte("http")
strHTTPS = []byte("https")
strHTTP11 = []byte("HTTP/1.1")
strColonSlashSlash = []byte("://")
strColonSpace = []byte(": ")
strGMT = []byte("GMT")
strResponseContinue = []byte("HTTP/1.1 100 Continue\r\n\r\n")
strGet = []byte("GET")
strHead = []byte("HEAD")
strPost = []byte("POST")
strPut = []byte("PUT")
strDelete = []byte("DELETE")
strExpect = []byte("Expect")
strConnection = []byte("Connection")
strContentLength = []byte("Content-Length")
strContentType = []byte("Content-Type")
strDate = []byte("Date")
strHost = []byte("Host")
strReferer = []byte("Referer")
strServer = []byte("Server")
strTransferEncoding = []byte("Transfer-Encoding")
strContentEncoding = []byte("Content-Encoding")
strAcceptEncoding = []byte("Accept-Encoding")
strUserAgent = []byte("User-Agent")
strCookie = []byte("Cookie")
strSetCookie = []byte("Set-Cookie")
strLocation = []byte("Location")
strIfModifiedSince = []byte("If-Modified-Since")
strLastModified = []byte("Last-Modified")
strAcceptRanges = []byte("Accept-Ranges")
strRange = []byte("Range")
strContentRange = []byte("Content-Range")
strCookieExpires = []byte("expires")
strCookieDomain = []byte("domain")
strCookiePath = []byte("path")
strCookieHTTPOnly = []byte("HttpOnly")
strCookieSecure = []byte("secure")
strClose = []byte("close")
strGzip = []byte("gzip")
strDeflate = []byte("deflate")
strKeepAlive = []byte("keep-alive")
strKeepAliveCamelCase = []byte("Keep-Alive")
strUpgrade = []byte("Upgrade")
strChunked = []byte("chunked")
strIdentity = []byte("identity")
str100Continue = []byte("100-continue")
strPostArgsContentType = []byte("application/x-www-form-urlencoded")
strMultipartFormData = []byte("multipart/form-data")
strBoundary = []byte("boundary")
strBytes = []byte("bytes")
strTextSlash = []byte("text/")
strApplicationSlash = []byte("application/")
)

@@ -1,369 +0,0 @@
package fasthttp
import (
"errors"
"net"
"strconv"
"sync"
"sync/atomic"
"time"
)
// Dial dials the given TCP addr using tcp4.
//
// This function has the following additional features compared to net.Dial:
//
// - It reduces load on the DNS resolver by caching resolved TCP addresses
// for DefaultDNSCacheDuration.
// - It dials all the resolved TCP addresses in round-robin manner until
// connection is established. This may be useful if certain addresses
// are temporarily unreachable.
// - It returns ErrDialTimeout if connection cannot be established during
// DefaultDialTimeout seconds. Use DialTimeout for customizing dial timeout.
//
// This dialer is intended for custom code wrapping before passing
// to Client.Dial or HostClient.Dial.
//
// For instance, per-host counters and/or limits may be implemented
// by such wrappers.
//
// The addr passed to the function must contain port. Example addr values:
//
// - foobar.baz:443
// - foo.bar:80
// - aaa.com:8080
func Dial(addr string) (net.Conn, error) {
return getDialer(DefaultDialTimeout, false)(addr)
}
// DialTimeout dials the given TCP addr using tcp4 using the given timeout.
//
// This function has the following additional features compared to net.Dial:
//
// - It reduces load on the DNS resolver by caching resolved TCP addresses
// for DefaultDNSCacheDuration.
// - It dials all the resolved TCP addresses in round-robin manner until
// connection is established. This may be useful if certain addresses
// are temporarily unreachable.
//
// This dialer is intended for custom code wrapping before passing
// to Client.Dial or HostClient.Dial.
//
// For instance, per-host counters and/or limits may be implemented
// by such wrappers.
//
// The addr passed to the function must contain port. Example addr values:
//
// - foobar.baz:443
// - foo.bar:80
// - aaa.com:8080
func DialTimeout(addr string, timeout time.Duration) (net.Conn, error) {
return getDialer(timeout, false)(addr)
}
// DialDualStack dials the given TCP addr using both tcp4 and tcp6.
//
// This function has the following additional features compared to net.Dial:
//
// - It reduces load on the DNS resolver by caching resolved TCP addresses
// for DefaultDNSCacheDuration.
// - It dials all the resolved TCP addresses in round-robin manner until
// connection is established. This may be useful if certain addresses
// are temporarily unreachable.
// - It returns ErrDialTimeout if connection cannot be established during
// DefaultDialTimeout seconds. Use DialDualStackTimeout for custom dial
// timeout.
//
// This dialer is intended for custom code wrapping before passing
// to Client.Dial or HostClient.Dial.
//
// For instance, per-host counters and/or limits may be implemented
// by such wrappers.
//
// The addr passed to the function must contain port. Example addr values:
//
// - foobar.baz:443
// - foo.bar:80
// - aaa.com:8080
func DialDualStack(addr string) (net.Conn, error) {
return getDialer(DefaultDialTimeout, true)(addr)
}
// DialDualStackTimeout dials the given TCP addr using both tcp4 and tcp6
// using the given timeout.
//
// This function has the following additional features compared to net.Dial:
//
// - It reduces load on the DNS resolver by caching resolved TCP addresses
// for DefaultDNSCacheDuration.
// - It dials all the resolved TCP addresses in round-robin manner until
// connection is established. This may be useful if certain addresses
// are temporarily unreachable.
//
// This dialer is intended for custom code wrapping before passing
// to Client.Dial or HostClient.Dial.
//
// For instance, per-host counters and/or limits may be implemented
// by such wrappers.
//
// The addr passed to the function must contain port. Example addr values:
//
// - foobar.baz:443
// - foo.bar:80
// - aaa.com:8080
func DialDualStackTimeout(addr string, timeout time.Duration) (net.Conn, error) {
return getDialer(timeout, true)(addr)
}
func getDialer(timeout time.Duration, dualStack bool) DialFunc {
if timeout <= 0 {
timeout = DefaultDialTimeout
}
timeoutRounded := int(timeout.Seconds()*10 + 9)
m := dialMap
if dualStack {
m = dialDualStackMap
}
dialMapLock.Lock()
d := m[timeoutRounded]
if d == nil {
dialer := dialerStd
if dualStack {
dialer = dialerDualStack
}
d = dialer.NewDial(timeout)
m[timeoutRounded] = d
}
dialMapLock.Unlock()
return d
}
var (
dialerStd = &tcpDialer{}
dialerDualStack = &tcpDialer{DualStack: true}
dialMap = make(map[int]DialFunc)
dialDualStackMap = make(map[int]DialFunc)
dialMapLock sync.Mutex
)
type tcpDialer struct {
DualStack bool
tcpAddrsLock sync.Mutex
tcpAddrsMap map[string]*tcpAddrEntry
concurrencyCh chan struct{}
once sync.Once
}
const maxDialConcurrency = 1000
func (d *tcpDialer) NewDial(timeout time.Duration) DialFunc {
d.once.Do(func() {
d.concurrencyCh = make(chan struct{}, maxDialConcurrency)
d.tcpAddrsMap = make(map[string]*tcpAddrEntry)
go d.tcpAddrsClean()
})
return func(addr string) (net.Conn, error) {
addrs, idx, err := d.getTCPAddrs(addr)
if err != nil {
return nil, err
}
network := "tcp4"
if d.DualStack {
network = "tcp"
}
var conn net.Conn
n := uint32(len(addrs))
deadline := time.Now().Add(timeout)
for n > 0 {
conn, err = tryDial(network, &addrs[idx%n], deadline, d.concurrencyCh)
if err == nil {
return conn, nil
}
if err == ErrDialTimeout {
return nil, err
}
idx++
n--
}
return nil, err
}
}
func tryDial(network string, addr *net.TCPAddr, deadline time.Time, concurrencyCh chan struct{}) (net.Conn, error) {
timeout := -time.Since(deadline)
if timeout <= 0 {
return nil, ErrDialTimeout
}
select {
case concurrencyCh <- struct{}{}:
default:
tc := acquireTimer(timeout)
isTimeout := false
select {
case concurrencyCh <- struct{}{}:
case <-tc.C:
isTimeout = true
}
releaseTimer(tc)
if isTimeout {
return nil, ErrDialTimeout
}
}
timeout = -time.Since(deadline)
if timeout <= 0 {
<-concurrencyCh
return nil, ErrDialTimeout
}
chv := dialResultChanPool.Get()
if chv == nil {
chv = make(chan dialResult, 1)
}
ch := chv.(chan dialResult)
go func() {
var dr dialResult
dr.conn, dr.err = net.DialTCP(network, nil, addr)
ch <- dr
<-concurrencyCh
}()
var (
conn net.Conn
err error
)
tc := acquireTimer(timeout)
select {
case dr := <-ch:
conn = dr.conn
err = dr.err
dialResultChanPool.Put(ch)
case <-tc.C:
err = ErrDialTimeout
}
releaseTimer(tc)
return conn, err
}
var dialResultChanPool sync.Pool
type dialResult struct {
conn net.Conn
err error
}
// ErrDialTimeout is returned when TCP dialing times out.
var ErrDialTimeout = errors.New("dialing to the given TCP address timed out")
// DefaultDialTimeout is the timeout used by Dial and DialDualStack
// for establishing TCP connections.
const DefaultDialTimeout = 3 * time.Second
type tcpAddrEntry struct {
addrs []net.TCPAddr
addrsIdx uint32
resolveTime time.Time
pending bool
}
// DefaultDNSCacheDuration is the duration for caching resolved TCP addresses
// by Dial* functions.
const DefaultDNSCacheDuration = time.Minute
func (d *tcpDialer) tcpAddrsClean() {
expireDuration := 2 * DefaultDNSCacheDuration
for {
time.Sleep(time.Second)
t := time.Now()
d.tcpAddrsLock.Lock()
for k, e := range d.tcpAddrsMap {
if t.Sub(e.resolveTime) > expireDuration {
delete(d.tcpAddrsMap, k)
}
}
d.tcpAddrsLock.Unlock()
}
}
func (d *tcpDialer) getTCPAddrs(addr string) ([]net.TCPAddr, uint32, error) {
d.tcpAddrsLock.Lock()
e := d.tcpAddrsMap[addr]
if e != nil && !e.pending && time.Since(e.resolveTime) > DefaultDNSCacheDuration {
e.pending = true
e = nil
}
d.tcpAddrsLock.Unlock()
if e == nil {
addrs, err := resolveTCPAddrs(addr, d.DualStack)
if err != nil {
d.tcpAddrsLock.Lock()
e = d.tcpAddrsMap[addr]
if e != nil && e.pending {
e.pending = false
}
d.tcpAddrsLock.Unlock()
return nil, 0, err
}
e = &tcpAddrEntry{
addrs: addrs,
resolveTime: time.Now(),
}
d.tcpAddrsLock.Lock()
d.tcpAddrsMap[addr] = e
d.tcpAddrsLock.Unlock()
}
idx := atomic.AddUint32(&e.addrsIdx, 1)
return e.addrs, idx, nil
}
func resolveTCPAddrs(addr string, dualStack bool) ([]net.TCPAddr, error) {
host, portS, err := net.SplitHostPort(addr)
if err != nil {
return nil, err
}
port, err := strconv.Atoi(portS)
if err != nil {
return nil, err
}
ips, err := net.LookupIP(host)
if err != nil {
return nil, err
}
n := len(ips)
addrs := make([]net.TCPAddr, 0, n)
for i := 0; i < n; i++ {
ip := ips[i]
if !dualStack && ip.To4() == nil {
continue
}
addrs = append(addrs, net.TCPAddr{
IP: ip,
Port: port,
})
}
if len(addrs) == 0 {
return nil, errNoDNSEntries
}
return addrs, nil
}
var errNoDNSEntries = errors.New("couldn't find DNS entries for the given domain. Try using DialDualStack")
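The doc comments above suggest implementing per-host counters or limits by wrapping these dialers. A hypothetical sketch of such a wrapper — `limitedDial` and the limit of 512 are illustrative, not part of the package:

```go
package dialwrap

import (
	"net"

	"github.com/VictoriaMetrics/fasthttp"
)

// dialSem caps the number of concurrently running dials; 512 is an
// arbitrary illustration, not a recommended value.
var dialSem = make(chan struct{}, 512)

// limitedDial wraps fasthttp.Dial and may be passed to Client.Dial
// or HostClient.Dial.
func limitedDial(addr string) (net.Conn, error) {
	dialSem <- struct{}{} // acquire a slot, blocking while 512 dials are in flight
	defer func() { <-dialSem }()
	return fasthttp.Dial(addr)
}
```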

@@ -1,44 +0,0 @@
package fasthttp
import (
"sync"
"time"
)
func initTimer(t *time.Timer, timeout time.Duration) *time.Timer {
if t == nil {
return time.NewTimer(timeout)
}
if t.Reset(timeout) {
panic("BUG: active timer trapped into initTimer()")
}
return t
}
func stopTimer(t *time.Timer) {
if !t.Stop() {
// Collect the possibly added time from the channel
// if the timer has been stopped and nobody collected its value.
select {
case <-t.C:
default:
}
}
}
func acquireTimer(timeout time.Duration) *time.Timer {
v := timerPool.Get()
if v == nil {
return time.NewTimer(timeout)
}
t := v.(*time.Timer)
initTimer(t, timeout)
return t
}
func releaseTimer(t *time.Timer) {
stopTimer(t)
timerPool.Put(t)
}
var timerPool sync.Pool
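An illustrative in-package helper (not part of the original file) showing how `acquireTimer`/`releaseTimer` are meant to bound a channel receive without allocating a fresh `time.Timer` per call:

```go
package fasthttp

import "time"

// recvWithTimeout is a hypothetical helper: it bounds a channel receive
// with a pooled timer instead of allocating a new time.Timer per call.
func recvWithTimeout(ch <-chan int, timeout time.Duration) (int, bool) {
	tc := acquireTimer(timeout)
	defer releaseTimer(tc) // stops the timer and recycles it via timerPool
	select {
	case v := <-ch:
		return v, true
	case <-tc.C:
		return 0, false
	}
}
```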

@@ -1,525 +0,0 @@
package fasthttp
import (
"bytes"
"io"
"sync"
)
// AcquireURI returns an empty URI instance from the pool.
//
// Release the URI with ReleaseURI after the URI is no longer needed.
// This allows reducing GC load.
func AcquireURI() *URI {
return uriPool.Get().(*URI)
}
// ReleaseURI releases the URI acquired via AcquireURI.
//
// The released URI mustn't be used after releasing it, otherwise data races
// may occur.
func ReleaseURI(u *URI) {
u.Reset()
uriPool.Put(u)
}
var uriPool = &sync.Pool{
New: func() interface{} {
return &URI{}
},
}
// URI represents URI :) .
//
// It is forbidden copying URI instances. Create new instance and use CopyTo
// instead.
//
// URI instance MUST NOT be used from concurrently running goroutines.
type URI struct {
noCopy noCopy
pathOriginal []byte
scheme []byte
path []byte
queryString []byte
hash []byte
host []byte
queryArgs Args
parsedQueryArgs bool
fullURI []byte
requestURI []byte
h *RequestHeader
}
// CopyTo copies uri contents to dst.
func (u *URI) CopyTo(dst *URI) {
dst.Reset()
dst.pathOriginal = append(dst.pathOriginal[:0], u.pathOriginal...)
dst.scheme = append(dst.scheme[:0], u.scheme...)
dst.path = append(dst.path[:0], u.path...)
dst.queryString = append(dst.queryString[:0], u.queryString...)
dst.hash = append(dst.hash[:0], u.hash...)
dst.host = append(dst.host[:0], u.host...)
u.queryArgs.CopyTo(&dst.queryArgs)
dst.parsedQueryArgs = u.parsedQueryArgs
// fullURI and requestURI shouldn't be copied, since they are created
// from scratch on each FullURI() and RequestURI() call.
dst.h = u.h
}
// Hash returns URI hash, i.e. qwe of http://aaa.com/foo/bar?baz=123#qwe .
//
// The returned value is valid until the next URI method call.
func (u *URI) Hash() []byte {
return u.hash
}
// SetHash sets URI hash.
func (u *URI) SetHash(hash string) {
u.hash = append(u.hash[:0], hash...)
}
// SetHashBytes sets URI hash.
func (u *URI) SetHashBytes(hash []byte) {
u.hash = append(u.hash[:0], hash...)
}
// QueryString returns URI query string,
// i.e. baz=123 of http://aaa.com/foo/bar?baz=123#qwe .
//
// The returned value is valid until the next URI method call.
func (u *URI) QueryString() []byte {
return u.queryString
}
// SetQueryString sets URI query string.
func (u *URI) SetQueryString(queryString string) {
u.queryString = append(u.queryString[:0], queryString...)
u.parsedQueryArgs = false
}
// SetQueryStringBytes sets URI query string.
func (u *URI) SetQueryStringBytes(queryString []byte) {
u.queryString = append(u.queryString[:0], queryString...)
u.parsedQueryArgs = false
}
// Path returns URI path, i.e. /foo/bar of http://aaa.com/foo/bar?baz=123#qwe .
//
// The returned path is always urldecoded and normalized,
// i.e. '//f%20obar/baz/../zzz' becomes '/f obar/zzz'.
//
// The returned value is valid until the next URI method call.
func (u *URI) Path() []byte {
path := u.path
if len(path) == 0 {
path = strSlash
}
return path
}
// SetPath sets URI path.
func (u *URI) SetPath(path string) {
u.pathOriginal = append(u.pathOriginal[:0], path...)
u.path = normalizePath(u.path, u.pathOriginal)
}
// SetPathBytes sets URI path.
func (u *URI) SetPathBytes(path []byte) {
u.pathOriginal = append(u.pathOriginal[:0], path...)
u.path = normalizePath(u.path, u.pathOriginal)
}
// PathOriginal returns the original path from requestURI passed to URI.Parse().
//
// The returned value is valid until the next URI method call.
func (u *URI) PathOriginal() []byte {
return u.pathOriginal
}
// Scheme returns URI scheme, i.e. http of http://aaa.com/foo/bar?baz=123#qwe .
//
// Returned scheme is always lowercased.
//
// The returned value is valid until the next URI method call.
func (u *URI) Scheme() []byte {
scheme := u.scheme
if len(scheme) == 0 {
scheme = strHTTP
}
return scheme
}
// SetScheme sets URI scheme, i.e. http, https, ftp, etc.
func (u *URI) SetScheme(scheme string) {
u.scheme = append(u.scheme[:0], scheme...)
lowercaseBytes(u.scheme)
}
// SetSchemeBytes sets URI scheme, i.e. http, https, ftp, etc.
func (u *URI) SetSchemeBytes(scheme []byte) {
u.scheme = append(u.scheme[:0], scheme...)
lowercaseBytes(u.scheme)
}
// Reset clears uri.
func (u *URI) Reset() {
u.pathOriginal = u.pathOriginal[:0]
u.scheme = u.scheme[:0]
u.path = u.path[:0]
u.queryString = u.queryString[:0]
u.hash = u.hash[:0]
u.host = u.host[:0]
u.queryArgs.Reset()
u.parsedQueryArgs = false
// There is no need for u.fullURI = u.fullURI[:0], since the full uri
// is calculated on each call to FullURI().
// There is no need for u.requestURI = u.requestURI[:0], since requestURI
// is calculated on each call to RequestURI().
u.h = nil
}
// Host returns host part, i.e. aaa.com of http://aaa.com/foo/bar?baz=123#qwe .
//
// Host is always lowercased.
func (u *URI) Host() []byte {
if len(u.host) == 0 && u.h != nil {
u.host = append(u.host[:0], u.h.Host()...)
lowercaseBytes(u.host)
u.h = nil
}
return u.host
}
// SetHost sets host for the uri.
func (u *URI) SetHost(host string) {
u.host = append(u.host[:0], host...)
lowercaseBytes(u.host)
}
// SetHostBytes sets host for the uri.
func (u *URI) SetHostBytes(host []byte) {
u.host = append(u.host[:0], host...)
lowercaseBytes(u.host)
}
// Parse initializes URI from the given host and uri.
//
// host may be nil. In this case uri must contain fully qualified uri,
// i.e. with scheme and host. http is assumed if scheme is omitted.
//
// uri may contain e.g. RequestURI without scheme and host if host is non-empty.
func (u *URI) Parse(host, uri []byte) {
u.parse(host, uri, nil)
}
func (u *URI) parseQuick(uri []byte, h *RequestHeader, isTLS bool) {
u.parse(nil, uri, h)
if isTLS {
u.scheme = append(u.scheme[:0], strHTTPS...)
}
}
func (u *URI) parse(host, uri []byte, h *RequestHeader) {
u.Reset()
u.h = h
scheme, host, uri := splitHostURI(host, uri)
u.scheme = append(u.scheme, scheme...)
lowercaseBytes(u.scheme)
u.host = append(u.host, host...)
lowercaseBytes(u.host)
b := uri
queryIndex := bytes.IndexByte(b, '?')
fragmentIndex := bytes.IndexByte(b, '#')
// Ignore query in fragment part
if fragmentIndex >= 0 && queryIndex > fragmentIndex {
queryIndex = -1
}
if queryIndex < 0 && fragmentIndex < 0 {
u.pathOriginal = append(u.pathOriginal, b...)
u.path = normalizePath(u.path, u.pathOriginal)
return
}
if queryIndex >= 0 {
// Path is everything up to the start of the query
u.pathOriginal = append(u.pathOriginal, b[:queryIndex]...)
u.path = normalizePath(u.path, u.pathOriginal)
if fragmentIndex < 0 {
u.queryString = append(u.queryString, b[queryIndex+1:]...)
} else {
u.queryString = append(u.queryString, b[queryIndex+1:fragmentIndex]...)
u.hash = append(u.hash, b[fragmentIndex+1:]...)
}
return
}
// fragmentIndex >= 0 && queryIndex < 0
// Path is up to the start of fragment
u.pathOriginal = append(u.pathOriginal, b[:fragmentIndex]...)
u.path = normalizePath(u.path, u.pathOriginal)
u.hash = append(u.hash, b[fragmentIndex+1:]...)
}
func normalizePath(dst, src []byte) []byte {
dst = dst[:0]
dst = addLeadingSlash(dst, src)
dst = decodeArgAppendNoPlus(dst, src)
// remove duplicate slashes
b := dst
bSize := len(b)
for {
n := bytes.Index(b, strSlashSlash)
if n < 0 {
break
}
b = b[n:]
copy(b, b[1:])
b = b[:len(b)-1]
bSize--
}
dst = dst[:bSize]
// remove /./ parts
b = dst
for {
n := bytes.Index(b, strSlashDotSlash)
if n < 0 {
break
}
nn := n + len(strSlashDotSlash) - 1
copy(b[n:], b[nn:])
b = b[:len(b)-nn+n]
}
// remove /foo/../ parts
for {
n := bytes.Index(b, strSlashDotDotSlash)
if n < 0 {
break
}
nn := bytes.LastIndexByte(b[:n], '/')
if nn < 0 {
nn = 0
}
n += len(strSlashDotDotSlash) - 1
copy(b[nn:], b[n:])
b = b[:len(b)-n+nn]
}
// remove trailing /foo/..
n := bytes.LastIndex(b, strSlashDotDot)
if n >= 0 && n+len(strSlashDotDot) == len(b) {
nn := bytes.LastIndexByte(b[:n], '/')
if nn < 0 {
return strSlash
}
b = b[:nn+1]
}
return b
}
// RequestURI returns RequestURI - i.e. URI without Scheme and Host.
func (u *URI) RequestURI() []byte {
dst := appendQuotedPath(u.requestURI[:0], u.Path())
if u.queryArgs.Len() > 0 {
dst = append(dst, '?')
dst = u.queryArgs.AppendBytes(dst)
} else if len(u.queryString) > 0 {
dst = append(dst, '?')
dst = append(dst, u.queryString...)
}
if len(u.hash) > 0 {
dst = append(dst, '#')
dst = append(dst, u.hash...)
}
u.requestURI = dst
return u.requestURI
}
// LastPathSegment returns the last part of uri path after '/'.
//
// Examples:
//
// - For /foo/bar/baz.html path returns baz.html.
// - For /foo/bar/ returns empty byte slice.
// - For /foobar.js returns foobar.js.
func (u *URI) LastPathSegment() []byte {
path := u.Path()
n := bytes.LastIndexByte(path, '/')
if n < 0 {
return path
}
return path[n+1:]
}
// Update updates uri.
//
// The following newURI types are accepted:
//
// - Absolute, i.e. http://foobar.com/aaa/bb?cc . In this case the original
// uri is replaced by newURI.
// - Absolute without scheme, i.e. //foobar.com/aaa/bb?cc. In this case
// the original scheme is preserved.
// - Missing host, i.e. /aaa/bb?cc . In this case only RequestURI part
// of the original uri is replaced.
// - Relative path, i.e. xx?yy=abc . In this case the original RequestURI
// is updated according to the new relative path.
func (u *URI) Update(newURI string) {
u.UpdateBytes(s2b(newURI))
}
// UpdateBytes updates uri.
//
// The following newURI types are accepted:
//
// - Absolute, i.e. http://foobar.com/aaa/bb?cc . In this case the original
// uri is replaced by newURI.
// - Absolute without scheme, i.e. //foobar.com/aaa/bb?cc. In this case
// the original scheme is preserved.
// - Missing host, i.e. /aaa/bb?cc . In this case only RequestURI part
// of the original uri is replaced.
// - Relative path, i.e. xx?yy=abc . In this case the original RequestURI
// is updated according to the new relative path.
func (u *URI) UpdateBytes(newURI []byte) {
u.requestURI = u.updateBytes(newURI, u.requestURI)
}
func (u *URI) updateBytes(newURI, buf []byte) []byte {
if len(newURI) == 0 {
return buf
}
n := bytes.Index(newURI, strSlashSlash)
if n >= 0 {
// absolute uri
var b [32]byte
schemeOriginal := b[:0]
if len(u.scheme) > 0 {
schemeOriginal = append([]byte(nil), u.scheme...)
}
u.Parse(nil, newURI)
if len(schemeOriginal) > 0 && len(u.scheme) == 0 {
u.scheme = append(u.scheme[:0], schemeOriginal...)
}
return buf
}
if newURI[0] == '/' {
// uri without host
buf = u.appendSchemeHost(buf[:0])
buf = append(buf, newURI...)
u.Parse(nil, buf)
return buf
}
// relative path
switch newURI[0] {
case '?':
// query string only update
u.SetQueryStringBytes(newURI[1:])
return append(buf[:0], u.FullURI()...)
case '#':
// update only hash
u.SetHashBytes(newURI[1:])
return append(buf[:0], u.FullURI()...)
default:
// update the last path part after the slash
path := u.Path()
n = bytes.LastIndexByte(path, '/')
if n < 0 {
panic("BUG: path must contain at least one slash")
}
buf = u.appendSchemeHost(buf[:0])
buf = appendQuotedPath(buf, path[:n+1])
buf = append(buf, newURI...)
u.Parse(nil, buf)
return buf
}
}
// FullURI returns full uri in the form {Scheme}://{Host}{RequestURI}#{Hash}.
func (u *URI) FullURI() []byte {
u.fullURI = u.AppendBytes(u.fullURI[:0])
return u.fullURI
}
// AppendBytes appends full uri to dst and returns the extended dst.
func (u *URI) AppendBytes(dst []byte) []byte {
dst = u.appendSchemeHost(dst)
return append(dst, u.RequestURI()...)
}
func (u *URI) appendSchemeHost(dst []byte) []byte {
dst = append(dst, u.Scheme()...)
dst = append(dst, strColonSlashSlash...)
return append(dst, u.Host()...)
}
// WriteTo writes full uri to w.
//
// WriteTo implements io.WriterTo interface.
func (u *URI) WriteTo(w io.Writer) (int64, error) {
n, err := w.Write(u.FullURI())
return int64(n), err
}
// String returns full uri.
func (u *URI) String() string {
return string(u.FullURI())
}
func splitHostURI(host, uri []byte) ([]byte, []byte, []byte) {
n := bytes.Index(uri, strSlashSlash)
if n < 0 {
return strHTTP, host, uri
}
scheme := uri[:n]
if bytes.IndexByte(scheme, '/') >= 0 {
return strHTTP, host, uri
}
if len(scheme) > 0 && scheme[len(scheme)-1] == ':' {
scheme = scheme[:len(scheme)-1]
}
n += len(strSlashSlash)
uri = uri[n:]
n = bytes.IndexByte(uri, '/')
if n < 0 {
// A hack for bogus urls like foobar.com?a=b without
// slash after host.
if n = bytes.IndexByte(uri, '?'); n >= 0 {
return scheme, uri[:n], uri[n:]
}
return scheme, uri, strSlash
}
return scheme, uri[:n], uri[n:]
}
// QueryArgs returns query args.
func (u *URI) QueryArgs() *Args {
u.parseQueryArgs()
return &u.queryArgs
}
func (u *URI) parseQueryArgs() {
if u.parsedQueryArgs {
return
}
u.queryArgs.ParseBytes(u.queryString)
u.parsedQueryArgs = true
}
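A usage sketch for the `URI` type, assuming the vendored `github.com/VictoriaMetrics/fasthttp` import path; note how `Parse` lowercases the scheme and host and normalizes the path:

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/fasthttp"
)

func main() {
	u := fasthttp.AcquireURI()
	defer fasthttp.ReleaseURI(u)

	u.Parse(nil, []byte("HTTP://Example.COM/foo/../bar/?baz=123#qwe"))
	fmt.Printf("scheme=%s host=%s path=%s query=%s hash=%s\n",
		u.Scheme(), u.Host(), u.Path(), u.QueryString(), u.Hash())
	// scheme=http host=example.com path=/bar/ query=baz=123 hash=qwe

	u.Update("?baz=456") // relative update: only the query string changes
	fmt.Println(u.String())
}
```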

@@ -1,13 +0,0 @@
//go:build !windows
// +build !windows
package fasthttp
func addLeadingSlash(dst, src []byte) []byte {
// add leading slash for unix paths
if len(src) == 0 || src[0] != '/' {
dst = append(dst, '/')
}
return dst
}

@@ -1,13 +0,0 @@
//go:build windows
// +build windows
package fasthttp
func addLeadingSlash(dst, src []byte) []byte {
// zero length and "C:/" case
if len(src) == 0 || (len(src) > 2 && src[1] != ':') {
dst = append(dst, '/')
}
return dst
}

@@ -1,71 +0,0 @@
package fasthttp
import (
"io"
)
type userDataKV struct {
key []byte
value interface{}
}
type userData []userDataKV
func (d *userData) Set(key string, value interface{}) {
args := *d
n := len(args)
for i := 0; i < n; i++ {
kv := &args[i]
if string(kv.key) == key {
kv.value = value
return
}
}
c := cap(args)
if c > n {
args = args[:n+1]
kv := &args[n]
kv.key = append(kv.key[:0], key...)
kv.value = value
*d = args
return
}
kv := userDataKV{}
kv.key = append(kv.key[:0], key...)
kv.value = value
*d = append(args, kv)
}
func (d *userData) SetBytes(key []byte, value interface{}) {
d.Set(b2s(key), value)
}
func (d *userData) Get(key string) interface{} {
args := *d
n := len(args)
for i := 0; i < n; i++ {
kv := &args[i]
if string(kv.key) == key {
return kv.value
}
}
return nil
}
func (d *userData) GetBytes(key []byte) interface{} {
return d.Get(b2s(key))
}
func (d *userData) Reset() {
args := *d
n := len(args)
for i := 0; i < n; i++ {
v := args[i].value
if vc, ok := v.(io.Closer); ok {
vc.Close()
}
}
*d = (*d)[:0]
}

@@ -1,231 +0,0 @@
package fasthttp
import (
"net"
"runtime"
"strings"
"sync"
"time"
)
// workerPool serves incoming connections via a pool of workers
// in FILO order, i.e. the most recently stopped worker will serve the next
// incoming connection.
//
// Such a scheme keeps CPU caches hot (in theory).
type workerPool struct {
// Function for serving server connections.
// It must leave c unclosed.
WorkerFunc func(c net.Conn) error
MaxWorkersCount int
LogAllErrors bool
MaxIdleWorkerDuration time.Duration
Logger Logger
lock sync.Mutex
workersCount int
mustStop bool
ready []*workerChan
stopCh chan struct{}
workerChanPool sync.Pool
}
type workerChan struct {
lastUseTime time.Time
ch chan net.Conn
}
func (wp *workerPool) Start() {
if wp.stopCh != nil {
panic("BUG: workerPool already started")
}
wp.stopCh = make(chan struct{})
stopCh := wp.stopCh
go func() {
var scratch []*workerChan
for {
wp.clean(&scratch)
select {
case <-stopCh:
return
default:
time.Sleep(wp.getMaxIdleWorkerDuration())
}
}
}()
}
func (wp *workerPool) Stop() {
if wp.stopCh == nil {
panic("BUG: workerPool wasn't started")
}
close(wp.stopCh)
wp.stopCh = nil
// Stop all the workers waiting for incoming connections.
// Do not wait for busy workers - they will stop after
// serving the connection and noticing wp.mustStop = true.
wp.lock.Lock()
ready := wp.ready
for i, ch := range ready {
ch.ch <- nil
ready[i] = nil
}
wp.ready = ready[:0]
wp.mustStop = true
wp.lock.Unlock()
}
func (wp *workerPool) getMaxIdleWorkerDuration() time.Duration {
if wp.MaxIdleWorkerDuration <= 0 {
return 10 * time.Second
}
return wp.MaxIdleWorkerDuration
}
func (wp *workerPool) clean(scratch *[]*workerChan) {
maxIdleWorkerDuration := wp.getMaxIdleWorkerDuration()
// Clean least recently used workers if they didn't serve connections
// for more than maxIdleWorkerDuration.
currentTime := time.Now()
wp.lock.Lock()
ready := wp.ready
n := len(ready)
i := 0
for i < n && currentTime.Sub(ready[i].lastUseTime) > maxIdleWorkerDuration {
i++
}
*scratch = append((*scratch)[:0], ready[:i]...)
if i > 0 {
m := copy(ready, ready[i:])
for i = m; i < n; i++ {
ready[i] = nil
}
wp.ready = ready[:m]
}
wp.lock.Unlock()
// Notify obsolete workers to stop.
// This notification must be outside the wp.lock, since ch.ch
// may be blocking and may consume a lot of time if many workers
// are located on non-local CPUs.
tmp := *scratch
for i, ch := range tmp {
ch.ch <- nil
tmp[i] = nil
}
}
func (wp *workerPool) Serve(c net.Conn) bool {
ch := wp.getCh()
if ch == nil {
return false
}
ch.ch <- c
return true
}
var workerChanCap = func() int {
// Use blocking workerChan if GOMAXPROCS=1.
// This immediately switches Serve to WorkerFunc, which results
// in higher performance (under go1.5 at least).
if runtime.GOMAXPROCS(0) == 1 {
return 0
}
// Use non-blocking workerChan if GOMAXPROCS>1,
// since otherwise the Serve caller (Acceptor) may lag accepting
// new connections if WorkerFunc is CPU-bound.
return 1
}()
func (wp *workerPool) getCh() *workerChan {
var ch *workerChan
createWorker := false
wp.lock.Lock()
ready := wp.ready
n := len(ready) - 1
if n < 0 {
if wp.workersCount < wp.MaxWorkersCount {
createWorker = true
wp.workersCount++
}
} else {
ch = ready[n]
ready[n] = nil
wp.ready = ready[:n]
}
wp.lock.Unlock()
if ch == nil {
if !createWorker {
return nil
}
vch := wp.workerChanPool.Get()
if vch == nil {
vch = &workerChan{
ch: make(chan net.Conn, workerChanCap),
}
}
ch = vch.(*workerChan)
go func() {
wp.workerFunc(ch)
wp.workerChanPool.Put(vch)
}()
}
return ch
}
func (wp *workerPool) release(ch *workerChan) bool {
ch.lastUseTime = time.Now()
wp.lock.Lock()
if wp.mustStop {
wp.lock.Unlock()
return false
}
wp.ready = append(wp.ready, ch)
wp.lock.Unlock()
return true
}
func (wp *workerPool) workerFunc(ch *workerChan) {
var c net.Conn
var err error
for c = range ch.ch {
if c == nil {
break
}
if err = wp.WorkerFunc(c); err != nil && err != errHijacked {
errStr := err.Error()
if wp.LogAllErrors || !(strings.Contains(errStr, "broken pipe") ||
strings.Contains(errStr, "reset by peer") ||
strings.Contains(errStr, "i/o timeout")) {
wp.Logger.Printf("error when serving connection %q<->%q: %s", c.LocalAddr(), c.RemoteAddr(), err)
}
}
if err != errHijacked {
c.Close()
}
c = nil
if !wp.release(ch) {
break
}
}
wp.lock.Lock()
wp.workersCount--
wp.lock.Unlock()
}
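An illustrative in-package sketch (not part of the original file) of how this worker pool is meant to be wired to a listener; `serveAll` is hypothetical, and a real caller would also set `Logger`:

```go
package fasthttp

import "net"

// serveAll shows the intended wiring: one workerPool per listener,
// with LIFO reuse of recently idle workers.
func serveAll(ln net.Listener, handler func(net.Conn) error) error {
	wp := &workerPool{
		WorkerFunc:      handler,
		MaxWorkersCount: 256 * 1024,
		// A real caller would also set Logger, which is used when
		// WorkerFunc returns an error.
	}
	wp.Start()
	defer wp.Stop()
	for {
		c, err := ln.Accept()
		if err != nil {
			return err
		}
		if !wp.Serve(c) {
			c.Close() // all MaxWorkersCount workers are busy
		}
	}
}
```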

@@ -1,168 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package socks
import (
"context"
"errors"
"io"
"net"
"strconv"
"time"
)
var (
noDeadline = time.Time{}
aLongTimeAgo = time.Unix(1, 0)
)
func (d *Dialer) connect(ctx context.Context, c net.Conn, address string) (_ net.Addr, ctxErr error) {
host, port, err := splitHostPort(address)
if err != nil {
return nil, err
}
if deadline, ok := ctx.Deadline(); ok && !deadline.IsZero() {
c.SetDeadline(deadline)
defer c.SetDeadline(noDeadline)
}
if ctx != context.Background() {
errCh := make(chan error, 1)
done := make(chan struct{})
defer func() {
close(done)
if ctxErr == nil {
ctxErr = <-errCh
}
}()
go func() {
select {
case <-ctx.Done():
c.SetDeadline(aLongTimeAgo)
errCh <- ctx.Err()
case <-done:
errCh <- nil
}
}()
}
b := make([]byte, 0, 6+len(host)) // the size here is just an estimate
b = append(b, Version5)
if len(d.AuthMethods) == 0 || d.Authenticate == nil {
b = append(b, 1, byte(AuthMethodNotRequired))
} else {
ams := d.AuthMethods
if len(ams) > 255 {
return nil, errors.New("too many authentication methods")
}
b = append(b, byte(len(ams)))
for _, am := range ams {
b = append(b, byte(am))
}
}
if _, ctxErr = c.Write(b); ctxErr != nil {
return
}
if _, ctxErr = io.ReadFull(c, b[:2]); ctxErr != nil {
return
}
if b[0] != Version5 {
return nil, errors.New("unexpected protocol version " + strconv.Itoa(int(b[0])))
}
am := AuthMethod(b[1])
if am == AuthMethodNoAcceptableMethods {
return nil, errors.New("no acceptable authentication methods")
}
if d.Authenticate != nil {
if ctxErr = d.Authenticate(ctx, c, am); ctxErr != nil {
return
}
}
b = b[:0]
b = append(b, Version5, byte(d.cmd), 0)
if ip := net.ParseIP(host); ip != nil {
if ip4 := ip.To4(); ip4 != nil {
b = append(b, AddrTypeIPv4)
b = append(b, ip4...)
} else if ip6 := ip.To16(); ip6 != nil {
b = append(b, AddrTypeIPv6)
b = append(b, ip6...)
} else {
return nil, errors.New("unknown address type")
}
} else {
if len(host) > 255 {
return nil, errors.New("FQDN too long")
}
b = append(b, AddrTypeFQDN)
b = append(b, byte(len(host)))
b = append(b, host...)
}
b = append(b, byte(port>>8), byte(port))
if _, ctxErr = c.Write(b); ctxErr != nil {
return
}
if _, ctxErr = io.ReadFull(c, b[:4]); ctxErr != nil {
return
}
if b[0] != Version5 {
return nil, errors.New("unexpected protocol version " + strconv.Itoa(int(b[0])))
}
if cmdErr := Reply(b[1]); cmdErr != StatusSucceeded {
return nil, errors.New("unknown error " + cmdErr.String())
}
if b[2] != 0 {
return nil, errors.New("non-zero reserved field")
}
l := 2
var a Addr
switch b[3] {
case AddrTypeIPv4:
l += net.IPv4len
a.IP = make(net.IP, net.IPv4len)
case AddrTypeIPv6:
l += net.IPv6len
a.IP = make(net.IP, net.IPv6len)
case AddrTypeFQDN:
if _, err := io.ReadFull(c, b[:1]); err != nil {
return nil, err
}
l += int(b[0])
default:
return nil, errors.New("unknown address type " + strconv.Itoa(int(b[3])))
}
if cap(b) < l {
b = make([]byte, l)
} else {
b = b[:l]
}
if _, ctxErr = io.ReadFull(c, b); ctxErr != nil {
return
}
if a.IP != nil {
copy(a.IP, b)
} else {
a.Name = string(b[:len(b)-2])
}
a.Port = int(b[len(b)-2])<<8 | int(b[len(b)-1])
return &a, nil
}
func splitHostPort(address string) (string, int, error) {
host, port, err := net.SplitHostPort(address)
if err != nil {
return "", 0, err
}
portnum, err := strconv.Atoi(port)
if err != nil {
return "", 0, err
}
if 1 > portnum || portnum > 0xffff {
return "", 0, errors.New("port number out of range " + port)
}
return host, portnum, nil
}

@@ -1,317 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package socks provides a SOCKS version 5 client implementation.
//
// SOCKS protocol version 5 is defined in RFC 1928.
// Username/Password authentication for SOCKS version 5 is defined in
// RFC 1929.
package socks
import (
"context"
"errors"
"io"
"net"
"strconv"
)
// A Command represents a SOCKS command.
type Command int
func (cmd Command) String() string {
switch cmd {
case CmdConnect:
return "socks connect"
case cmdBind:
return "socks bind"
default:
return "socks " + strconv.Itoa(int(cmd))
}
}
// An AuthMethod represents a SOCKS authentication method.
type AuthMethod int
// A Reply represents a SOCKS command reply code.
type Reply int
func (code Reply) String() string {
switch code {
case StatusSucceeded:
return "succeeded"
case 0x01:
return "general SOCKS server failure"
case 0x02:
return "connection not allowed by ruleset"
case 0x03:
return "network unreachable"
case 0x04:
return "host unreachable"
case 0x05:
return "connection refused"
case 0x06:
return "TTL expired"
case 0x07:
return "command not supported"
case 0x08:
return "address type not supported"
default:
return "unknown code: " + strconv.Itoa(int(code))
}
}
// Wire protocol constants.
const (
Version5 = 0x05
AddrTypeIPv4 = 0x01
AddrTypeFQDN = 0x03
AddrTypeIPv6 = 0x04
CmdConnect Command = 0x01 // establishes an active-open forward proxy connection
cmdBind Command = 0x02 // establishes a passive-open forward proxy connection
AuthMethodNotRequired AuthMethod = 0x00 // no authentication required
AuthMethodUsernamePassword AuthMethod = 0x02 // use username/password
AuthMethodNoAcceptableMethods AuthMethod = 0xff // no acceptable authentication methods
StatusSucceeded Reply = 0x00
)
// An Addr represents a SOCKS-specific address.
// Either Name or IP is used exclusively.
type Addr struct {
Name string // fully-qualified domain name
IP net.IP
Port int
}
func (a *Addr) Network() string { return "socks" }
func (a *Addr) String() string {
if a == nil {
return "<nil>"
}
port := strconv.Itoa(a.Port)
if a.IP == nil {
return net.JoinHostPort(a.Name, port)
}
return net.JoinHostPort(a.IP.String(), port)
}
// A Conn represents a forward proxy connection.
type Conn struct {
net.Conn
boundAddr net.Addr
}
// BoundAddr returns the address assigned by the proxy server for
// connecting to the command target address from the proxy server.
func (c *Conn) BoundAddr() net.Addr {
if c == nil {
return nil
}
return c.boundAddr
}
// A Dialer holds SOCKS-specific options.
type Dialer struct {
cmd Command // either CmdConnect or cmdBind
proxyNetwork string // network between a proxy server and a client
proxyAddress string // proxy server address
// ProxyDial specifies the optional dial function for
// establishing the transport connection.
ProxyDial func(context.Context, string, string) (net.Conn, error)
// AuthMethods specifies the list of request authentication
// methods.
// If empty, SOCKS client requests only AuthMethodNotRequired.
AuthMethods []AuthMethod
// Authenticate specifies the optional authentication
// function. It must be non-nil when AuthMethods is not empty.
// It must return an error if the authentication fails.
Authenticate func(context.Context, io.ReadWriter, AuthMethod) error
}
// DialContext connects to the provided address on the provided
// network.
//
// The returned error value may be a net.OpError. When the Op field of
// net.OpError contains "socks", the Source field contains a proxy
// server address and the Addr field contains a command target
// address.
//
// See func Dial of the net package of standard library for a
// description of the network and address parameters.
func (d *Dialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
if err := d.validateTarget(network, address); err != nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
if ctx == nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: errors.New("nil context")}
}
var err error
var c net.Conn
if d.ProxyDial != nil {
c, err = d.ProxyDial(ctx, d.proxyNetwork, d.proxyAddress)
} else {
var dd net.Dialer
c, err = dd.DialContext(ctx, d.proxyNetwork, d.proxyAddress)
}
if err != nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
a, err := d.connect(ctx, c, address)
if err != nil {
c.Close()
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
return &Conn{Conn: c, boundAddr: a}, nil
}
// DialWithConn initiates a connection from SOCKS server to the target
// network and address using the connection c that is already
// connected to the SOCKS server.
//
// It returns the connection's local address assigned by the SOCKS
// server.
func (d *Dialer) DialWithConn(ctx context.Context, c net.Conn, network, address string) (net.Addr, error) {
if err := d.validateTarget(network, address); err != nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
if ctx == nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: errors.New("nil context")}
}
a, err := d.connect(ctx, c, address)
if err != nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
return a, nil
}
// Dial connects to the provided address on the provided network.
//
// Unlike DialContext, it returns a raw transport connection instead
// of a forward proxy connection.
//
// Deprecated: Use DialContext or DialWithConn instead.
func (d *Dialer) Dial(network, address string) (net.Conn, error) {
if err := d.validateTarget(network, address); err != nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
var err error
var c net.Conn
if d.ProxyDial != nil {
c, err = d.ProxyDial(context.Background(), d.proxyNetwork, d.proxyAddress)
} else {
c, err = net.Dial(d.proxyNetwork, d.proxyAddress)
}
if err != nil {
proxy, dst, _ := d.pathAddrs(address)
return nil, &net.OpError{Op: d.cmd.String(), Net: network, Source: proxy, Addr: dst, Err: err}
}
if _, err := d.DialWithConn(context.Background(), c, network, address); err != nil {
c.Close()
return nil, err
}
return c, nil
}
func (d *Dialer) validateTarget(network, address string) error {
switch network {
case "tcp", "tcp6", "tcp4":
default:
return errors.New("network not implemented")
}
switch d.cmd {
case CmdConnect, cmdBind:
default:
return errors.New("command not implemented")
}
return nil
}
func (d *Dialer) pathAddrs(address string) (proxy, dst net.Addr, err error) {
for i, s := range []string{d.proxyAddress, address} {
host, port, err := splitHostPort(s)
if err != nil {
return nil, nil, err
}
a := &Addr{Port: port}
a.IP = net.ParseIP(host)
if a.IP == nil {
a.Name = host
}
if i == 0 {
proxy = a
} else {
dst = a
}
}
return
}
// NewDialer returns a new Dialer that dials through the provided
// proxy server's network and address.
func NewDialer(network, address string) *Dialer {
return &Dialer{proxyNetwork: network, proxyAddress: address, cmd: CmdConnect}
}
const (
authUsernamePasswordVersion = 0x01
authStatusSucceeded = 0x00
)
// UsernamePassword are the credentials for the username/password
// authentication method.
type UsernamePassword struct {
Username string
Password string
}
// Authenticate authenticates a pair of username and password with the
// proxy server.
func (up *UsernamePassword) Authenticate(ctx context.Context, rw io.ReadWriter, auth AuthMethod) error {
switch auth {
case AuthMethodNotRequired:
return nil
case AuthMethodUsernamePassword:
if len(up.Username) == 0 || len(up.Username) > 255 || len(up.Password) > 255 {
return errors.New("invalid username/password")
}
b := []byte{authUsernamePasswordVersion}
b = append(b, byte(len(up.Username)))
b = append(b, up.Username...)
b = append(b, byte(len(up.Password)))
b = append(b, up.Password...)
// TODO(mikio): handle IO deadlines and cancelation if
// necessary
if _, err := rw.Write(b); err != nil {
return err
}
if _, err := io.ReadFull(rw, b[:2]); err != nil {
return err
}
if b[0] != authUsernamePasswordVersion {
return errors.New("invalid username/password version")
}
if b[1] != authStatusSucceeded {
return errors.New("username/password authentication failed")
}
return nil
}
return errors.New("unsupported authentication method " + strconv.Itoa(int(auth)))
}
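A hedged usage sketch; `golang.org/x/net/internal/socks` is an internal package, so outside code reaches it through `golang.org/x/net/proxy`, but the shape of the API is:

```go
package main

import (
	"context"
	"log"

	"golang.org/x/net/internal/socks" // internal: normally used via golang.org/x/net/proxy
)

func main() {
	d := socks.NewDialer("tcp", "127.0.0.1:1080") // hypothetical local SOCKS5 proxy
	up := socks.UsernamePassword{Username: "user", Password: "secret"}
	d.AuthMethods = []socks.AuthMethod{
		socks.AuthMethodNotRequired,
		socks.AuthMethodUsernamePassword,
	}
	d.Authenticate = up.Authenticate

	conn, err := d.DialContext(context.Background(), "tcp", "example.com:443")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Printf("bound addr: %v", conn.(*socks.Conn).BoundAddr())
}
```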

@@ -1,54 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy
import (
"context"
"net"
)
// A ContextDialer dials using a context.
type ContextDialer interface {
DialContext(ctx context.Context, network, address string) (net.Conn, error)
}
// Dial works like DialContext on net.Dialer but using a dialer returned by FromEnvironment.
//
// The passed ctx is only used for returning the Conn, not the lifetime of the Conn.
//
// Custom dialers (registered via RegisterDialerType) that do not implement ContextDialer
// can leak a goroutine for as long as it takes the underlying Dialer implementation to timeout.
//
// A Conn returned from a successful Dial after the context has been cancelled will be immediately closed.
func Dial(ctx context.Context, network, address string) (net.Conn, error) {
d := FromEnvironment()
if xd, ok := d.(ContextDialer); ok {
return xd.DialContext(ctx, network, address)
}
return dialContext(ctx, d, network, address)
}
// WARNING: this can leak a goroutine for as long as the underlying Dialer implementation takes to timeout
// A Conn returned from a successful Dial after the context has been cancelled will be immediately closed.
func dialContext(ctx context.Context, d Dialer, network, address string) (net.Conn, error) {
var (
conn net.Conn
done = make(chan struct{}, 1)
err error
)
go func() {
conn, err = d.Dial(network, address)
close(done)
if conn != nil && ctx.Err() != nil {
conn.Close()
}
}()
select {
case <-ctx.Done():
err = ctx.Err()
case <-done:
}
return conn, err
}

@@ -1,31 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy
import (
"context"
"net"
)
type direct struct{}
// Direct implements Dialer by making network connections directly using net.Dial or net.DialContext.
var Direct = direct{}
var (
_ Dialer = Direct
_ ContextDialer = Direct
)
// Dial directly invokes net.Dial with the supplied parameters.
func (direct) Dial(network, addr string) (net.Conn, error) {
return net.Dial(network, addr)
}
// DialContext instantiates a net.Dialer and invokes its DialContext receiver with the supplied parameters.
func (direct) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
var d net.Dialer
return d.DialContext(ctx, network, addr)
}

@@ -1,155 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy
import (
"context"
"net"
"strings"
)
// A PerHost directs connections to a default Dialer unless the host name
// requested matches one of a number of exceptions.
type PerHost struct {
def, bypass Dialer
bypassNetworks []*net.IPNet
bypassIPs []net.IP
bypassZones []string
bypassHosts []string
}
// NewPerHost returns a PerHost Dialer that directs connections to either
// defaultDialer or bypass, depending on whether the connection matches one of
// the configured rules.
func NewPerHost(defaultDialer, bypass Dialer) *PerHost {
return &PerHost{
def: defaultDialer,
bypass: bypass,
}
}
// Dial connects to the address addr on the given network through either
// defaultDialer or bypass.
func (p *PerHost) Dial(network, addr string) (c net.Conn, err error) {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return nil, err
}
return p.dialerForRequest(host).Dial(network, addr)
}
// DialContext connects to the address addr on the given network through either
// defaultDialer or bypass.
func (p *PerHost) DialContext(ctx context.Context, network, addr string) (c net.Conn, err error) {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return nil, err
}
d := p.dialerForRequest(host)
if x, ok := d.(ContextDialer); ok {
return x.DialContext(ctx, network, addr)
}
return dialContext(ctx, d, network, addr)
}
func (p *PerHost) dialerForRequest(host string) Dialer {
if ip := net.ParseIP(host); ip != nil {
for _, net := range p.bypassNetworks {
if net.Contains(ip) {
return p.bypass
}
}
for _, bypassIP := range p.bypassIPs {
if bypassIP.Equal(ip) {
return p.bypass
}
}
return p.def
}
for _, zone := range p.bypassZones {
if strings.HasSuffix(host, zone) {
return p.bypass
}
if host == zone[1:] {
// For a zone ".example.com", we match "example.com"
// too.
return p.bypass
}
}
for _, bypassHost := range p.bypassHosts {
if bypassHost == host {
return p.bypass
}
}
return p.def
}
// AddFromString parses a string that contains comma-separated values
// specifying hosts that should use the bypass proxy. Each value is either an
// IP address, a CIDR range, a zone (*.example.com) or a host name
// (localhost). A best effort is made to parse the string and errors are
// ignored.
func (p *PerHost) AddFromString(s string) {
hosts := strings.Split(s, ",")
for _, host := range hosts {
host = strings.TrimSpace(host)
if len(host) == 0 {
continue
}
if strings.Contains(host, "/") {
// We assume that it's a CIDR address like 127.0.0.0/8
if _, net, err := net.ParseCIDR(host); err == nil {
p.AddNetwork(net)
}
continue
}
if ip := net.ParseIP(host); ip != nil {
p.AddIP(ip)
continue
}
if strings.HasPrefix(host, "*.") {
p.AddZone(host[1:])
continue
}
p.AddHost(host)
}
}
// AddIP specifies an IP address that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match an IP.
func (p *PerHost) AddIP(ip net.IP) {
p.bypassIPs = append(p.bypassIPs, ip)
}
// AddNetwork specifies an IP range that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match.
func (p *PerHost) AddNetwork(net *net.IPNet) {
p.bypassNetworks = append(p.bypassNetworks, net)
}
// AddZone specifies a DNS suffix that will use the bypass proxy. A zone of
// "example.com" matches "example.com" and all of its subdomains.
func (p *PerHost) AddZone(zone string) {
if strings.HasSuffix(zone, ".") {
zone = zone[:len(zone)-1]
}
if !strings.HasPrefix(zone, ".") {
zone = "." + zone
}
p.bypassZones = append(p.bypassZones, zone)
}
// AddHost specifies a host name that will use the bypass proxy.
func (p *PerHost) AddHost(host string) {
if strings.HasSuffix(host, ".") {
host = host[:len(host)-1]
}
p.bypassHosts = append(p.bypassHosts, host)
}
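A usage sketch for `PerHost` via the public `golang.org/x/net/proxy` API; the proxy address and bypass list are illustrative:

```go
package main

import (
	"log"

	"golang.org/x/net/proxy"
)

func main() {
	socksDialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
	if err != nil {
		log.Fatal(err)
	}
	// Route everything through the SOCKS proxy, except local/internal hosts.
	ph := proxy.NewPerHost(socksDialer, proxy.Direct)
	ph.AddFromString("localhost,10.0.0.0/8,*.internal.example.com")

	conn, err := ph.Dial("tcp", "db.internal.example.com:5432") // bypasses the proxy
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```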

@@ -1,149 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package proxy provides support for a variety of protocols to proxy network
// data.
package proxy // import "golang.org/x/net/proxy"
import (
"errors"
"net"
"net/url"
"os"
"sync"
)
// A Dialer is a means to establish a connection.
// Custom dialers should also implement ContextDialer.
type Dialer interface {
// Dial connects to the given address via the proxy.
Dial(network, addr string) (c net.Conn, err error)
}
// Auth contains authentication parameters that specific Dialers may require.
type Auth struct {
User, Password string
}
// FromEnvironment returns the dialer specified by the proxy-related
// variables in the environment and makes underlying connections
// directly.
func FromEnvironment() Dialer {
return FromEnvironmentUsing(Direct)
}
// FromEnvironmentUsing returns the dialer specified by the proxy-related
// variables in the environment and makes underlying connections
// using the provided forwarding Dialer (for instance, a *net.Dialer
// with desired configuration).
func FromEnvironmentUsing(forward Dialer) Dialer {
allProxy := allProxyEnv.Get()
if len(allProxy) == 0 {
return forward
}
proxyURL, err := url.Parse(allProxy)
if err != nil {
return forward
}
proxy, err := FromURL(proxyURL, forward)
if err != nil {
return forward
}
noProxy := noProxyEnv.Get()
if len(noProxy) == 0 {
return proxy
}
perHost := NewPerHost(proxy, forward)
perHost.AddFromString(noProxy)
return perHost
}
// proxySchemes is a map from URL schemes to a function that creates a Dialer
// from a URL with such a scheme.
var proxySchemes map[string]func(*url.URL, Dialer) (Dialer, error)
// RegisterDialerType takes a URL scheme and a function to generate Dialers from
// a URL with that scheme and a forwarding Dialer. Registered schemes are used
// by FromURL.
func RegisterDialerType(scheme string, f func(*url.URL, Dialer) (Dialer, error)) {
if proxySchemes == nil {
proxySchemes = make(map[string]func(*url.URL, Dialer) (Dialer, error))
}
proxySchemes[scheme] = f
}
// FromURL returns a Dialer given a URL specification and an underlying
// Dialer for it to make network requests.
func FromURL(u *url.URL, forward Dialer) (Dialer, error) {
var auth *Auth
if u.User != nil {
auth = new(Auth)
auth.User = u.User.Username()
if p, ok := u.User.Password(); ok {
auth.Password = p
}
}
switch u.Scheme {
case "socks5", "socks5h":
addr := u.Hostname()
port := u.Port()
if port == "" {
port = "1080"
}
return SOCKS5("tcp", net.JoinHostPort(addr, port), auth, forward)
}
// If the scheme doesn't match any of the built-in schemes, see if it
// was registered by another package.
if proxySchemes != nil {
if f, ok := proxySchemes[u.Scheme]; ok {
return f(u, forward)
}
}
return nil, errors.New("proxy: unknown scheme: " + u.Scheme)
}
var (
allProxyEnv = &envOnce{
names: []string{"ALL_PROXY", "all_proxy"},
}
noProxyEnv = &envOnce{
names: []string{"NO_PROXY", "no_proxy"},
}
)
// envOnce looks up an environment variable (optionally by multiple
// names) once. It mitigates expensive lookups on some platforms
// (e.g. Windows).
// (Borrowed from net/http/transport.go)
type envOnce struct {
names []string
once sync.Once
val string
}
func (e *envOnce) Get() string {
e.once.Do(e.init)
return e.val
}
func (e *envOnce) init() {
for _, n := range e.names {
e.val = os.Getenv(n)
if e.val != "" {
return
}
}
}
// reset is used by tests
func (e *envOnce) reset() {
e.once = sync.Once{}
e.val = ""
}
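A minimal sketch of the environment-driven entry point; the proxy URL in the comment is illustrative:

```go
package main

import (
	"log"

	"golang.org/x/net/proxy"
)

func main() {
	// Honors ALL_PROXY / NO_PROXY, e.g.:
	//   ALL_PROXY=socks5://127.0.0.1:1080 NO_PROXY=localhost ./app
	d := proxy.FromEnvironment()
	conn, err := d.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```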

@@ -1,42 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy
import (
"context"
"net"
"golang.org/x/net/internal/socks"
)
// SOCKS5 returns a Dialer that makes SOCKSv5 connections to the given
// address with an optional username and password.
// See RFC 1928 and RFC 1929.
func SOCKS5(network, address string, auth *Auth, forward Dialer) (Dialer, error) {
d := socks.NewDialer(network, address)
if forward != nil {
if f, ok := forward.(ContextDialer); ok {
d.ProxyDial = func(ctx context.Context, network string, address string) (net.Conn, error) {
return f.DialContext(ctx, network, address)
}
} else {
d.ProxyDial = func(ctx context.Context, network string, address string) (net.Conn, error) {
return dialContext(ctx, forward, network, address)
}
}
}
if auth != nil {
up := socks.UsernamePassword{
Username: auth.User,
Password: auth.Password,
}
d.AuthMethods = []socks.AuthMethod{
socks.AuthMethodNotRequired,
socks.AuthMethodUsernamePassword,
}
d.Authenticate = up.Authenticate
}
return d, nil
}
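A sketch combining `SOCKS5` with a forwarding `*net.Dialer`; since `*net.Dialer` implements `ContextDialer`, the proxy hop itself is dialed with the context:

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"golang.org/x/net/proxy"
)

func main() {
	// Forward dialer with its own timeout; SOCKS5 detects that *net.Dialer
	// implements ContextDialer and uses DialContext for the proxy hop.
	forward := &net.Dialer{Timeout: 5 * time.Second}
	d, err := proxy.SOCKS5("tcp", "127.0.0.1:1080",
		&proxy.Auth{User: "user", Password: "secret"}, forward)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	conn, err := d.(proxy.ContextDialer).DialContext(ctx, "tcp", "example.com:443")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```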

vendor/modules.txt

@@ -99,11 +99,6 @@ github.com/VictoriaMetrics/easyproto
# github.com/VictoriaMetrics/fastcache v1.12.2
## explicit; go 1.13
github.com/VictoriaMetrics/fastcache
# github.com/VictoriaMetrics/fasthttp v1.2.0
## explicit; go 1.19
github.com/VictoriaMetrics/fasthttp
github.com/VictoriaMetrics/fasthttp/fasthttputil
github.com/VictoriaMetrics/fasthttp/stackless
# github.com/VictoriaMetrics/metrics v1.31.0
## explicit; go 1.17
github.com/VictoriaMetrics/metrics
@@ -674,9 +669,7 @@ golang.org/x/net/http/httpproxy
golang.org/x/net/http2
golang.org/x/net/http2/hpack
golang.org/x/net/idna
golang.org/x/net/internal/socks
golang.org/x/net/internal/timeseries
golang.org/x/net/proxy
golang.org/x/net/trace
# golang.org/x/oauth2 v0.16.0
## explicit; go 1.18